modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-05-31 18:27:08) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 461 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-05-31 18:26:36) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Bodolaz/Unit-4.2-final2 | Bodolaz | 2023-06-27T19:17:52Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-27T19:17:30Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Unit-4.2-final2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 26.20 +/- 25.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
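The course uses a custom Reinforce implementation, so there is no single standard loader. Below is a minimal, hypothetical sketch for pulling the checkpoint from the Hub; the `model.pt` filename and the need for the notebook's `Policy` class are assumptions, not details confirmed by this card.
```python
# Hypothetical loading sketch -- "model.pt" is an assumed filename, and unpickling the
# saved policy requires the course notebook's Policy class to be importable.
import torch
from huggingface_hub import hf_hub_download

checkpoint = hf_hub_download(repo_id="Bodolaz/Unit-4.2-final2", filename="model.pt")
policy = torch.load(checkpoint, map_location="cpu")  # returns whatever object the notebook saved
policy.eval()
```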
|
leniero/gmag | leniero | 2023-06-27T19:08:30Z | 0 | 0 | diffusers | [
"diffusers",
"gmag",
"queer",
"brazil",
"en",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-06T23:39:36Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
tags:
- gmag
- queer
- brazil
--- |
ivyraine/test_model | ivyraine | 2023-06-27T19:01:17Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"region:us"
] | null | 2023-06-27T19:00:36Z | ---
library_name: adapter-transformers
--- |
facebook/galactica-125m | facebook | 2023-06-27T19:00:15Z | 2,773 | 36 | transformers | [
"transformers",
"pytorch",
"safetensors",
"opt",
"text-generation",
"galactica",
"arxiv:1810.03993",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-11-16T13:21:41Z | ---
license: cc-by-nc-4.0
tags:
- galactica
widget:
- text: "The Transformer architecture [START_REF]"
- text: "The Schwarzschild radius is defined as: \\["
- text: "A force of 0.6N is applied to an object, which accelerates at 3m/s. What is its mass? <work>"
- text: "Lecture 1: The Ising Model\n\n"
- text: "[START_I_SMILES]"
- text: "[START_AMINO]GHMQSITAGQKVISKHKNGRFYQCEVVRLTTETFYEVNFDDGSFSDNLYPEDIVSQDCLQFGPPAEGEVVQVRWTDGQVYGAKFVASHPIQMYQVEFEDGSQLVVKRDDVYTLDEELP[END_AMINO] ## Keywords"
inference: false
---

# GALACTICA 125M (mini)
Model card from the original [repo](https://github.com/paperswithcode/galai/blob/main/docs/model_card.md)
Following [Mitchell et al. (2018)](https://arxiv.org/abs/1810.03993), this model card provides information about the GALACTICA model, how it was trained, and the intended use cases. Full details about how the model was trained and evaluated can be found in the [release paper](https://galactica.org/paper.pdf).
## Model Details
The GALACTICA models are trained on a large-scale scientific corpus. The models are designed to perform scientific tasks, including but not limited to citation prediction, scientific QA, mathematical reasoning, summarization, document generation, molecular property prediction and entity extraction. The models were developed by the Papers with Code team at Meta AI to study the use of language models for the automatic organization of science. We train models with sizes ranging from 125M to 120B parameters. Below is a summary of the released models:
| Size | Parameters |
|:-----------:|:-----------:|
| `mini` | 125 M |
| `base` | 1.3 B |
| `standard` | 6.7 B |
| `large` | 30 B |
| `huge` | 120 B |
## Release Date
November 2022
## Model Type
Transformer based architecture in a decoder-only setup with a few modifications (see paper for more details).
## Paper & Demo
[Paper](https://galactica.org/paper.pdf) / [Demo](https://galactica.org)
## Model Use
The primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain. We also anticipate the model will be useful for developers who wish to build scientific tooling. However, we caution against production use without safeguards given the potential of language models to hallucinate.
The models are made available under a non-commercial CC BY-NC 4.0 license. More information about how to use the model can be found in the README.md of this repository.
## Training Data
The GALACTICA models are trained on 106 billion tokens of open-access scientific text and data. This includes papers, textbooks, scientific websites, encyclopedias, reference material, knowledge bases, and more. We tokenize different modalities to provide a natural language interface for different tasks. See the README.md for more information. See the paper for full information on the training data.
## How to use
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m", device_map="auto")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m", device_map="auto", torch_dtype=torch.float16)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m", device_map="auto", load_in_8bit=True)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
## Performance and Limitations
The model outperforms several existing language models on a range of knowledge probes, reasoning, and knowledge-intensive scientific tasks. This also extends to general NLP tasks, where GALACTICA outperforms other open source general language models. That being said, we note a number of limitations in this section.
As with other language models, GALACTICA is often prone to hallucination - and training on a high-quality academic corpus does not prevent this, especially for less popular and less cited scientific concepts. There are no guarantees of truthful output when generating from the model. This extends to specific modalities such as citation prediction. While GALACTICA's citation behaviour approaches the ground truth citation behaviour with scale, the model continues to exhibit a popularity bias at larger scales.
In addition, we evaluated the model on several types of benchmarks related to stereotypes and toxicity. Overall, the model exhibits substantially lower toxicity rates compared to other large language models. That being said, the model continues to exhibit bias on certain measures (see the paper for details). So we recommend care when using the model for generations.
## Broader Implications
GALACTICA can potentially be used as a new way to discover academic literature. We also expect a lot of downstream use for application to particular domains, such as mathematics, biology, and chemistry. In the paper, we demonstrated several examples of the model acting as alternative to standard search tools. We expect a new generation of scientific tools to be built upon large language models such as GALACTICA.
We encourage researchers to investigate beneficial and new use cases for these models. That being said, it is important to be aware of the current limitations of large language models. Researchers should pay attention to common issues such as hallucination and biases that could emerge from using these models.
## Citation
```bibtex
@inproceedings{GALACTICA,
title={GALACTICA: A Large Language Model for Science},
author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
year={2022}
}
``` |
facebook/data2vec-audio-large-100h | facebook | 2023-06-27T18:52:19Z | 80 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"data2vec-audio",
"automatic-speech-recognition",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-02T16:00:42Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Data2Vec-Audio-Large-100h
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
This is the large model, pretrained and fine-tuned on 100 hours of Librispeech, operating on 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-large-100h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-100h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
|
derek-thomas/distilhubert-finetuned-gtzan | derek-thomas | 2023-06-27T18:47:55Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-06-27T16:58:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7072
- Accuracy: 0.81
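For a quick try-out (not part of the generated card), the checkpoint can be loaded through the `audio-classification` pipeline; the audio path below is a placeholder.
```python
from transformers import pipeline

# Sketch only: "song.wav" is a placeholder path to a music clip.
classifier = pipeline("audio-classification", model="derek-thomas/distilhubert-finetuned-gtzan")
print(classifier("song.wav"))  # list of genre labels with scores
```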
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0694 | 1.0 | 57 | 2.0452 | 0.42 |
| 1.6795 | 2.0 | 114 | 1.5549 | 0.55 |
| 1.1745 | 3.0 | 171 | 1.2160 | 0.73 |
| 1.1069 | 4.0 | 228 | 1.0979 | 0.73 |
| 0.7755 | 5.0 | 285 | 0.9282 | 0.73 |
| 0.7111 | 6.0 | 342 | 0.8393 | 0.78 |
| 0.5609 | 7.0 | 399 | 0.7911 | 0.79 |
| 0.4891 | 8.0 | 456 | 0.7098 | 0.81 |
| 0.518 | 9.0 | 513 | 0.7079 | 0.8 |
| 0.5737 | 10.0 | 570 | 0.7072 | 0.81 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jumartineze/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos | Jumartineze | 2023-06-27T18:15:23Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-27T16:54:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0252
- F1: 0.5395
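As a hedged usage sketch (not part of the generated card), the fine-tuned checkpoint can be queried with the `text-classification` pipeline; the Spanish review below is a placeholder and the label names depend on the training setup.
```python
from transformers import pipeline

# Sketch only: the example review is a placeholder.
clf = pipeline(
    "text-classification",
    model="Jumartineze/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos",
)
print(clf("El servicio fue excelente y la comida llegó a tiempo."))
```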
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0599 | 1.0 | 766 | 1.0518 | 0.5080 |
| 0.9391 | 2.0 | 1532 | 1.0252 | 0.5395 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
numanBot/customer_feedback_summarization | numanBot | 2023-06-27T18:11:20Z | 61 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-27T18:03:46Z | from transformers import TFAutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = TFAutoModelForSeq2SeqLM.from_pretrained("numanBot/customer_feedback_summarization") |
maidacundo/falcon_qlora_r2_sql_no_schema | maidacundo | 2023-06-27T18:09:11Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:spider",
"license:apache-2.0",
"region:us"
] | null | 2023-06-27T17:13:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- spider
model-index:
- name: falcon_qlora_r2_sql_no_schema
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_qlora_r2_sql_no_schema
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the spider dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 43.7
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MindNetML/ppo-SnowballTarget | MindNetML | 2023-06-27T17:55:01Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-06-27T17:54:55Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MindNetML/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
winterForestStump/Roberta-fake-news-detector | winterForestStump | 2023-06-27T17:53:47Z | 137 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"en",
"license:gpl-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-27T09:46:03Z | ---
license: gpl-2.0
language:
- en
tags:
- text-classification
widget:
- text: "According to the former prime minister of Italy, Mario Draghi, no one in the EU needs peace or negotiations, only the total defeat of Russia, and the destroyed Ukraine will just be collateral damage of the EU ambitions."
example_title: "Fake news"
---
# Fake News Recognition
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of the RoBERTa model ['jy46604790/Fake-News-Bert-Detect'](https://huggingface.co/jy46604790/Fake-News-Bert-Detect).
It was trained on 8,000 news articles from the https://euvsdisinfo.eu/ portal.
It classifies any news text of up to 512 words (the excess is truncated automatically).
Labels:
* 0: Fake news
* 1: Real news
## How to Get Started with the Model
Use the code below to get started with the model.
### Download The Model
```
from transformers import pipeline
MODEL = "winterForestStump/Roberta-fake-news-detector"
clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL)
```
### Feed Data
```
text = "From the very beginning, the EU has been extremely non-transparent. The deployment of the European Union presence in Armenia was carried out forcefully, under serious pressure from Brussels"
```
### Result
```
result = clf(text)
result
```
### Output
```
[{'label': 'FAKE', 'score': 0.9999946355819702}]
```
About the data source EUVSDISINFO.eu:
Using data analysis and media monitoring services in multiple languages, EUvsDisinfo identifies, compiles, and exposes disinformation cases originating in pro-Kremlin outlets. These cases (and their disproofs) are collected in the EUvsDisinfo database – the only searchable, open-source repository of its kind. The database is updated every week.
|
chriskim2273/test_headline_qa | chriskim2273 | 2023-06-27T17:53:46Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-27T17:31:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test_headline_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_headline_qa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9920
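A minimal, hypothetical way to query the checkpoint with the `question-answering` pipeline (the question and context below are placeholders, not from the original card):
```python
from transformers import pipeline

# Sketch only: question and context are placeholder strings.
qa = pipeline("question-answering", model="chriskim2273/test_headline_qa")
print(qa(question="Who released the report?",
         context="The central bank released its quarterly report on Tuesday."))
```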
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 5.7992 |
| No log | 2.0 | 4 | 5.7051 |
| No log | 3.0 | 6 | 5.6068 |
| No log | 4.0 | 8 | 5.5043 |
| No log | 5.0 | 10 | 5.3968 |
| No log | 6.0 | 12 | 5.2848 |
| No log | 7.0 | 14 | 5.1784 |
| No log | 8.0 | 16 | 5.0876 |
| No log | 9.0 | 18 | 5.0222 |
| No log | 10.0 | 20 | 4.9920 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
zofiski/squad-bloom-3b | zofiski | 2023-06-27T17:36:35Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-27T17:36:33Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
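The card does not name the base model. A minimal loading sketch, assuming the repository holds a PEFT adapter trained on `bigscience/bloom-3b` (inferred from the repo name, not confirmed by the card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigscience/bloom-3b"  # assumption: base model inferred from the repo name
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "zofiski/squad-bloom-3b")  # attach the adapter weights
```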
|
Nekochu/Wav2Lip | Nekochu | 2023-06-27T17:32:53Z | 0 | 1 | null | [
"arxiv:2008.10010",
"region:us"
] | null | 2023-06-27T17:25:26Z | Original upload: https://github.com/Rudrabha/Wav2Lip
# **Wav2Lip**: *Accurately Lip-syncing Videos In The Wild*
For commercial requests, please contact us at [email protected] or [email protected]. We have an HD model ready that can be used commercially.
This code is part of the paper: _A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild_ published at ACM Multimedia 2020.
[](https://paperswithcode.com/sota/lip-sync-on-lrs2?p=a-lip-sync-expert-is-all-you-need-for-speech)
[](https://paperswithcode.com/sota/lip-sync-on-lrs3?p=a-lip-sync-expert-is-all-you-need-for-speech)
[](https://paperswithcode.com/sota/lip-sync-on-lrw?p=a-lip-sync-expert-is-all-you-need-for-speech)
|📑 Original Paper|📰 Project Page|🌀 Demo|⚡ Live Testing|📔 Colab Notebook
|:-:|:-:|:-:|:-:|:-:|
[Paper](http://arxiv.org/abs/2008.10010) | [Project Page](http://cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild/) | [Demo Video](https://youtu.be/0fXaDCZNOJc) | [Interactive Demo](https://bhaasha.iiit.ac.in/lipsync) | [Colab Notebook](https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing) /[Updated Collab Notebook](https://colab.research.google.com/drive/1IjFW1cLevs6Ouyu4Yht4mnR4yeuMqO7Y#scrollTo=MH1m608OymLH)
<img src="https://drive.google.com/uc?export=view&id=1Wn0hPmpo4GRbCIJR8Tf20Akzdi1qjjG9"/>
----------
**Highlights**
----------
- Weights of the visual quality disc have been updated in the README!
- Lip-sync videos to any target speech with high accuracy :100:. Try our [interactive demo](https://bhaasha.iiit.ac.in/lipsync).
- :sparkles: Works for any identity, voice, and language. Also works for CGI faces and synthetic voices.
- Complete training code, inference code, and pretrained models are available :boom:
- Or, quick-start with the Google Colab Notebook: [Link](https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing). Checkpoints and samples are available in a Google Drive [folder](https://drive.google.com/drive/folders/1I-0dNLfFOSFwrfqjNa-SXuwaURHE5K4k?usp=sharing) as well. There is also a [tutorial video](https://www.youtube.com/watch?v=Ic0TBhfuOrA) on this, courtesy of [What Make Art](https://www.youtube.com/channel/UCmGXH-jy0o2CuhqtpxbaQgA). Also, thanks to [Eyal Gruss](https://eyalgruss.com), there is a more accessible [Google Colab notebook](https://j.mp/wav2lip) with more useful features. A tutorial collab notebook is present at this [link](https://colab.research.google.com/drive/1IjFW1cLevs6Ouyu4Yht4mnR4yeuMqO7Y#scrollTo=MH1m608OymLH).
- :fire: :fire: Several new, reliable evaluation benchmarks and metrics [[`evaluation/` folder of this repo]](https://github.com/Rudrabha/Wav2Lip/tree/master/evaluation) released. Instructions to calculate the metrics reported in the paper are also present.
--------
**Disclaimer**
--------
All results from this open-source code or our [demo website](https://bhaasha.iiit.ac.in/lipsync) should be used for research/academic/personal purposes only. As the models are trained on the <a href="http://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html">LRS2 dataset</a>, any form of commercial use is strictly prohibited. For commercial requests please contact us directly!
Prerequisites
-------------
- `Python 3.6`
- ffmpeg: `sudo apt-get install ffmpeg`
- Install necessary packages using `pip install -r requirements.txt`. Alternatively, instructions for using a docker image are provided [here](https://gist.github.com/xenogenesi/e62d3d13dadbc164124c830e9c453668). Have a look at [this comment](https://github.com/Rudrabha/Wav2Lip/issues/131#issuecomment-725478562) and comment on [the gist](https://gist.github.com/xenogenesi/e62d3d13dadbc164124c830e9c453668) if you encounter any issues.
- Face detection [pre-trained model](https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth) should be downloaded to `face_detection/detection/sfd/s3fd.pth`. Alternative [link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/prajwal_k_research_iiit_ac_in/EZsy6qWuivtDnANIG73iHjIBjMSoojcIV0NULXV-yiuiIg?e=qTasa8) if the above does not work.
Getting the weights
----------
| Model | Description | Link to the model |
| :-------------: | :---------------: | :---------------: |
| Wav2Lip | Highly accurate lip-sync | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/Eb3LEzbfuKlJiR600lQWRxgBIY27JZg80f7V9jtMfbNDaQ?e=TBFBVW) |
| Wav2Lip + GAN | Slightly inferior lip-sync, but better visual quality | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EdjI7bZlgApMqsVoEUUXpLsBxqXbn5z8VTmoxp55YNDcIA?e=n9ljGW) |
| Expert Discriminator | Weights of the expert discriminator | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQRvmiZg-HRAjvI6zqN9eTEBP74KefynCwPWVmF57l-AYA?e=ZRPHKP) |
| Visual Quality Discriminator | Weights of the visual disc trained in a GAN setup | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQVqH88dTm1HjlK11eNba5gBbn15WMS0B0EZbDBttqrqkg?e=ic0ljo) |
Lip-syncing videos using the pre-trained models (Inference)
-------
You can lip-sync any video to any audio:
```bash
python inference.py --checkpoint_path <ckpt> --face <video.mp4> --audio <an-audio-source>
```
The result is saved (by default) in `results/result_voice.mp4`. You can specify it as an argument, similar to several other available options. The audio source can be any file supported by `FFMPEG` containing audio data: `*.wav`, `*.mp3` or even a video file, from which the code will automatically extract the audio.
##### Tips for better results:
- Experiment with the `--pads` argument to adjust the detected face bounding box. Often leads to improved results. You might need to increase the bottom padding to include the chin region. E.g. `--pads 0 20 0 0`.
- If you see the mouth position dislocated or some weird artifacts such as two mouths, then it can be because of over-smoothing the face detections. Use the `--nosmooth` argument and give another try.
- Experiment with the `--resize_factor` argument, to get a lower resolution video. Why? The models are trained on faces which were at a lower resolution. You might get better, visually pleasing results for 720p videos than for 1080p videos (in many cases, the latter works well too).
- The Wav2Lip model without GAN usually needs more experimenting with the above two to get the most ideal results, and sometimes, can give you a better result as well.
Preparing LRS2 for training
----------
Our models are trained on LRS2. See [here](#training-on-datasets-other-than-lrs2) for a few suggestions regarding training on other datasets.
##### LRS2 dataset folder structure
```
data_root (mvlrs_v1)
├── main, pretrain (we use only main folder in this work)
| ├── list of folders
| │ ├── five-digit numbered video IDs ending with (.mp4)
```
Place the LRS2 filelists (train, val, test) `.txt` files in the `filelists/` folder.
##### Preprocess the dataset for fast training
```bash
python preprocess.py --data_root data_root/main --preprocessed_root lrs2_preprocessed/
```
Additional options like `batch_size` and the number of GPUs to use in parallel can also be set.
##### Preprocessed LRS2 folder structure
```
preprocessed_root (lrs2_preprocessed)
├── list of folders
| ├── Folders with five-digit numbered video IDs
| │ ├── *.jpg
| │ ├── audio.wav
```
Train!
----------
There are two major steps: (i) Train the expert lip-sync discriminator, (ii) Train the Wav2Lip model(s).
##### Training the expert discriminator
You can download [the pre-trained weights](#getting-the-weights) if you want to skip this step. To train it:
```bash
python color_syncnet_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints>
```
##### Training the Wav2Lip models
You can either train the model without the additional visual quality discriminator (< 1 day of training) or use the discriminator (~2 days). For the former, run:
```bash
python wav2lip_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints> --syncnet_checkpoint_path <path_to_expert_disc_checkpoint>
```
To train with the visual quality discriminator, you should run `hq_wav2lip_train.py` instead. The arguments for both files are similar. In both cases, you can resume training as well. Look at `python wav2lip_train.py --help` for more details. You can also set additional, less commonly used hyper-parameters at the bottom of the `hparams.py` file.
Training on datasets other than LRS2
------------------------------------
Training on other datasets might require modifications to the code. Please read the following before you raise an issue:
- You might not get good results by training/fine-tuning on a few minutes of a single speaker. This is a separate research problem, to which we do not have a solution yet. Thus, we would most likely not be able to resolve your issue.
- You must train the expert discriminator for your own dataset before training Wav2Lip.
- If it is your own dataset downloaded from the web, in most cases it needs to be sync-corrected.
- Be mindful of the FPS of the videos of your dataset. Changes to FPS would need significant code changes.
- The expert discriminator's eval loss should go down to ~0.25 and the Wav2Lip eval sync loss should go down to ~0.2 to get good results.
When raising an issue on this topic, please let us know that you are aware of all these points.
We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model.
Evaluation
----------
Please check the `evaluation/` folder for the instructions.
License and Citation
----------
This repository can only be used for personal/research/non-commercial purposes. However, for commercial requests, please contact us directly at [email protected] or [email protected]. We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model. Please cite the following paper if you use this repository:
```
@inproceedings{10.1145/3394171.3413532,
author = {Prajwal, K R and Mukhopadhyay, Rudrabha and Namboodiri, Vinay P. and Jawahar, C.V.},
title = {A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild},
year = {2020},
isbn = {9781450379885},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3394171.3413532},
doi = {10.1145/3394171.3413532},
booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
pages = {484–492},
numpages = {9},
keywords = {lip sync, talking face generation, video generation},
location = {Seattle, WA, USA},
series = {MM '20}
}
```
Acknowledgements
----------
Parts of the code structure is inspired by this [TTS repository](https://github.com/r9y9/deepvoice3_pytorch). We thank the author for this wonderful code. The code for Face Detection has been taken from the [face_alignment](https://github.com/1adrianb/face-alignment) repository. We thank the authors for releasing their code and models. We thank [zabique](https://github.com/zabique) for the tutorial collab notebook.
|
mnavas/beto-finetuned-token-reqadjzar | mnavas | 2023-06-27T17:23:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-07T15:07:50Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: beto-finetuned-token-reqadjzar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto-finetuned-token-reqadjzar
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1061
- Precision: 0.2533
- Recall: 0.3333
- F1: 0.2879
- Accuracy: 0.8498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7331 | 1.0 | 24 | 0.5920 | 0.0 | 0.0 | 0.0 | 0.7532 |
| 0.4759 | 2.0 | 48 | 0.3954 | 0.0085 | 0.0175 | 0.0115 | 0.8321 |
| 0.3186 | 3.0 | 72 | 0.5127 | 0.0188 | 0.0702 | 0.0296 | 0.8159 |
| 0.1906 | 4.0 | 96 | 0.4865 | 0.1190 | 0.2632 | 0.1639 | 0.8509 |
| 0.145 | 5.0 | 120 | 0.4650 | 0.1597 | 0.3333 | 0.2159 | 0.8760 |
| 0.1107 | 6.0 | 144 | 0.5465 | 0.1062 | 0.2105 | 0.1412 | 0.8514 |
| 0.0903 | 7.0 | 168 | 0.5441 | 0.1359 | 0.2456 | 0.175 | 0.8796 |
| 0.0698 | 8.0 | 192 | 0.4353 | 0.1204 | 0.2281 | 0.1576 | 0.8842 |
| 0.0505 | 9.0 | 216 | 0.7170 | 0.19 | 0.3333 | 0.2420 | 0.8432 |
| 0.0687 | 10.0 | 240 | 0.5893 | 0.1963 | 0.3684 | 0.2561 | 0.8860 |
| 0.039 | 11.0 | 264 | 0.5877 | 0.1951 | 0.4211 | 0.2667 | 0.8780 |
| 0.0278 | 12.0 | 288 | 0.5715 | 0.2237 | 0.2982 | 0.2556 | 0.8577 |
| 0.0354 | 13.0 | 312 | 0.9535 | 0.2283 | 0.3684 | 0.2819 | 0.8532 |
| 0.024 | 14.0 | 336 | 0.6500 | 0.2169 | 0.3158 | 0.2571 | 0.8674 |
| 0.0223 | 15.0 | 360 | 0.7513 | 0.1855 | 0.4035 | 0.2541 | 0.8722 |
| 0.0156 | 16.0 | 384 | 0.6566 | 0.3 | 0.4737 | 0.3673 | 0.9012 |
| 0.0156 | 17.0 | 408 | 0.8436 | 0.2292 | 0.3860 | 0.2876 | 0.8696 |
| 0.0189 | 18.0 | 432 | 0.8043 | 0.1711 | 0.2281 | 0.1955 | 0.8181 |
| 0.0128 | 19.0 | 456 | 0.6518 | 0.1619 | 0.2982 | 0.2099 | 0.8814 |
| 0.0122 | 20.0 | 480 | 0.8418 | 0.2347 | 0.4035 | 0.2968 | 0.8793 |
| 0.0242 | 21.0 | 504 | 0.7948 | 0.2292 | 0.3860 | 0.2876 | 0.8814 |
| 0.0124 | 22.0 | 528 | 0.8059 | 0.2037 | 0.3860 | 0.2667 | 0.8842 |
| 0.0098 | 23.0 | 552 | 0.9458 | 0.1765 | 0.2632 | 0.2113 | 0.8584 |
| 0.0287 | 24.0 | 576 | 0.7110 | 0.1488 | 0.3158 | 0.2022 | 0.8825 |
| 0.0253 | 25.0 | 600 | 0.6823 | 0.2021 | 0.3333 | 0.2517 | 0.8781 |
| 0.0151 | 26.0 | 624 | 0.7382 | 0.2022 | 0.3158 | 0.2466 | 0.8791 |
| 0.0118 | 27.0 | 648 | 0.6036 | 0.2360 | 0.3684 | 0.2877 | 0.8965 |
| 0.0102 | 28.0 | 672 | 0.9152 | 0.1765 | 0.3158 | 0.2264 | 0.8446 |
| 0.0229 | 29.0 | 696 | 0.6878 | 0.2584 | 0.4035 | 0.3151 | 0.8982 |
| 0.0168 | 30.0 | 720 | 0.7333 | 0.2784 | 0.4737 | 0.3506 | 0.8937 |
| 0.0145 | 31.0 | 744 | 0.6051 | 0.1864 | 0.3860 | 0.2514 | 0.9 |
| 0.0207 | 32.0 | 768 | 0.9083 | 0.3279 | 0.3509 | 0.3390 | 0.8894 |
| 0.0191 | 33.0 | 792 | 0.6983 | 0.2222 | 0.3509 | 0.2721 | 0.8884 |
| 0.0103 | 34.0 | 816 | 0.7287 | 0.2449 | 0.4211 | 0.3097 | 0.8840 |
| 0.0091 | 35.0 | 840 | 0.5929 | 0.2184 | 0.3333 | 0.2639 | 0.8851 |
| 0.0059 | 36.0 | 864 | 0.7604 | 0.2421 | 0.4035 | 0.3026 | 0.8810 |
| 0.0035 | 37.0 | 888 | 0.9380 | 0.2143 | 0.3684 | 0.2710 | 0.8622 |
| 0.0025 | 38.0 | 912 | 0.9824 | 0.2 | 0.3509 | 0.2548 | 0.8704 |
| 0.0059 | 39.0 | 936 | 1.0658 | 0.2796 | 0.4561 | 0.3467 | 0.8669 |
| 0.0199 | 40.0 | 960 | 0.9755 | 0.1705 | 0.3860 | 0.2366 | 0.8449 |
| 0.0034 | 41.0 | 984 | 0.9697 | 0.2619 | 0.3860 | 0.3121 | 0.8656 |
| 0.0035 | 42.0 | 1008 | 1.0582 | 0.1959 | 0.3333 | 0.2468 | 0.8461 |
| 0.0088 | 43.0 | 1032 | 0.8500 | 0.1849 | 0.3860 | 0.25 | 0.8515 |
| 0.0263 | 44.0 | 1056 | 1.2832 | 0.2 | 0.3509 | 0.2548 | 0.8255 |
| 0.0088 | 45.0 | 1080 | 0.9282 | 0.2308 | 0.4211 | 0.2981 | 0.8534 |
| 0.0343 | 46.0 | 1104 | 0.7165 | 0.2222 | 0.3158 | 0.2609 | 0.8594 |
| 0.0024 | 47.0 | 1128 | 0.7355 | 0.2308 | 0.4737 | 0.3103 | 0.8782 |
| 0.0019 | 48.0 | 1152 | 0.6493 | 0.2165 | 0.3684 | 0.2727 | 0.8779 |
| 0.0009 | 49.0 | 1176 | 0.6999 | 0.1964 | 0.3860 | 0.2604 | 0.8766 |
| 0.0008 | 50.0 | 1200 | 0.7496 | 0.2062 | 0.3509 | 0.2597 | 0.8709 |
| 0.0009 | 51.0 | 1224 | 0.7670 | 0.2019 | 0.3684 | 0.2609 | 0.8750 |
| 0.0006 | 52.0 | 1248 | 0.7549 | 0.24 | 0.4211 | 0.3057 | 0.8832 |
| 0.0007 | 53.0 | 1272 | 0.7556 | 0.2706 | 0.4035 | 0.3239 | 0.8870 |
| 0.0007 | 54.0 | 1296 | 0.7188 | 0.1695 | 0.3509 | 0.2286 | 0.8833 |
| 0.0005 | 55.0 | 1320 | 0.7120 | 0.1927 | 0.3684 | 0.2530 | 0.8822 |
| 0.0009 | 56.0 | 1344 | 0.7377 | 0.2245 | 0.3860 | 0.2839 | 0.8819 |
| 0.0008 | 57.0 | 1368 | 0.7295 | 0.2277 | 0.4035 | 0.2911 | 0.8859 |
| 0.0009 | 58.0 | 1392 | 0.7158 | 0.2340 | 0.3860 | 0.2914 | 0.8900 |
| 0.0013 | 59.0 | 1416 | 0.6715 | 0.1897 | 0.3860 | 0.2543 | 0.8941 |
| 0.0006 | 60.0 | 1440 | 0.6787 | 0.21 | 0.3684 | 0.2675 | 0.8861 |
| 0.0007 | 61.0 | 1464 | 0.6794 | 0.2584 | 0.4035 | 0.3151 | 0.8940 |
| 0.0012 | 62.0 | 1488 | 0.6823 | 0.2273 | 0.3509 | 0.2759 | 0.8778 |
| 0.0008 | 63.0 | 1512 | 0.7189 | 0.2588 | 0.3860 | 0.3099 | 0.8791 |
| 0.0008 | 64.0 | 1536 | 0.7077 | 0.2371 | 0.4035 | 0.2987 | 0.8905 |
| 0.0007 | 65.0 | 1560 | 0.7201 | 0.2738 | 0.4035 | 0.3262 | 0.8860 |
| 0.0005 | 66.0 | 1584 | 0.7339 | 0.2584 | 0.4035 | 0.3151 | 0.8894 |
| 0.0005 | 67.0 | 1608 | 0.7490 | 0.2157 | 0.3860 | 0.2767 | 0.8845 |
| 0.0006 | 68.0 | 1632 | 0.7342 | 0.2162 | 0.4211 | 0.2857 | 0.8833 |
| 0.0012 | 69.0 | 1656 | 0.7287 | 0.3108 | 0.4035 | 0.3511 | 0.8895 |
| 0.0012 | 70.0 | 1680 | 0.8877 | 0.2079 | 0.3684 | 0.2658 | 0.8615 |
| 0.0007 | 71.0 | 1704 | 0.9370 | 0.2095 | 0.3860 | 0.2716 | 0.8644 |
| 0.002 | 72.0 | 1728 | 0.7715 | 0.2391 | 0.3860 | 0.2953 | 0.8677 |
| 0.0007 | 73.0 | 1752 | 0.8765 | 0.22 | 0.3860 | 0.2803 | 0.8628 |
| 0.0006 | 74.0 | 1776 | 0.8515 | 0.2371 | 0.4035 | 0.2987 | 0.8639 |
| 0.0007 | 75.0 | 1800 | 0.8448 | 0.2286 | 0.4211 | 0.2963 | 0.8633 |
| 0.0009 | 76.0 | 1824 | 0.8501 | 0.2232 | 0.4386 | 0.2959 | 0.8650 |
| 0.0007 | 77.0 | 1848 | 0.8550 | 0.2198 | 0.3509 | 0.2703 | 0.8657 |
| 0.0005 | 78.0 | 1872 | 0.7445 | 0.25 | 0.4035 | 0.3087 | 0.8780 |
| 0.0007 | 79.0 | 1896 | 0.8889 | 0.26 | 0.4561 | 0.3312 | 0.8630 |
| 0.0005 | 80.0 | 1920 | 0.8930 | 0.2812 | 0.4737 | 0.3529 | 0.8650 |
| 0.0004 | 81.0 | 1944 | 0.8678 | 0.26 | 0.4561 | 0.3312 | 0.8745 |
| 0.0005 | 82.0 | 1968 | 0.8747 | 0.2784 | 0.4737 | 0.3506 | 0.8746 |
| 0.0005 | 83.0 | 1992 | 0.8726 | 0.2872 | 0.4737 | 0.3576 | 0.8687 |
| 0.001 | 84.0 | 2016 | 0.8887 | 0.2857 | 0.4211 | 0.3404 | 0.8693 |
| 0.0006 | 85.0 | 2040 | 0.7915 | 0.2963 | 0.4211 | 0.3478 | 0.8821 |
| 0.0007 | 86.0 | 2064 | 1.0194 | 0.2857 | 0.4211 | 0.3404 | 0.8606 |
| 0.0009 | 87.0 | 2088 | 0.7594 | 0.2366 | 0.3860 | 0.2933 | 0.8777 |
| 0.0021 | 88.0 | 2112 | 0.9788 | 0.25 | 0.3333 | 0.2857 | 0.8539 |
| 0.0012 | 89.0 | 2136 | 0.8719 | 0.2093 | 0.3158 | 0.2517 | 0.8697 |
| 0.0019 | 90.0 | 2160 | 1.1859 | 0.1810 | 0.3684 | 0.2428 | 0.8111 |
| 0.001 | 91.0 | 2184 | 0.9690 | 0.2118 | 0.3158 | 0.2535 | 0.8421 |
| 0.0007 | 92.0 | 2208 | 0.9863 | 0.1880 | 0.3860 | 0.2529 | 0.8495 |
| 0.0006 | 93.0 | 2232 | 0.9942 | 0.1868 | 0.2982 | 0.2297 | 0.8641 |
| 0.0007 | 94.0 | 2256 | 1.0118 | 0.2159 | 0.3333 | 0.2621 | 0.8637 |
| 0.0007 | 95.0 | 2280 | 1.0435 | 0.2754 | 0.3333 | 0.3016 | 0.8615 |
| 0.0008 | 96.0 | 2304 | 0.9795 | 0.2471 | 0.3684 | 0.2958 | 0.8657 |
| 0.0007 | 97.0 | 2328 | 0.9189 | 0.2020 | 0.3509 | 0.2564 | 0.8807 |
| 0.0009 | 98.0 | 2352 | 0.9240 | 0.2273 | 0.3509 | 0.2759 | 0.8762 |
| 0.0005 | 99.0 | 2376 | 0.8891 | 0.2561 | 0.3684 | 0.3022 | 0.8821 |
| 0.0004 | 100.0 | 2400 | 0.9028 | 0.2469 | 0.3509 | 0.2899 | 0.8818 |
| 0.0004 | 101.0 | 2424 | 0.9228 | 0.2410 | 0.3509 | 0.2857 | 0.8830 |
| 0.0004 | 102.0 | 2448 | 0.9409 | 0.2278 | 0.3158 | 0.2647 | 0.8795 |
| 0.0006 | 103.0 | 2472 | 0.9777 | 0.24 | 0.3158 | 0.2727 | 0.8796 |
| 0.0005 | 104.0 | 2496 | 0.9872 | 0.2432 | 0.3158 | 0.2748 | 0.8791 |
| 0.0006 | 105.0 | 2520 | 0.9820 | 0.2329 | 0.2982 | 0.2615 | 0.8746 |
| 0.0006 | 106.0 | 2544 | 1.0301 | 0.2879 | 0.3333 | 0.3089 | 0.8702 |
| 0.0006 | 107.0 | 2568 | 1.0468 | 0.3226 | 0.3509 | 0.3361 | 0.8637 |
| 0.0004 | 108.0 | 2592 | 1.0155 | 0.2941 | 0.3509 | 0.3200 | 0.8683 |
| 0.0005 | 109.0 | 2616 | 0.9970 | 0.2821 | 0.3860 | 0.3259 | 0.8678 |
| 0.0004 | 110.0 | 2640 | 1.0453 | 0.28 | 0.3684 | 0.3182 | 0.8687 |
| 0.0009 | 111.0 | 2664 | 0.9247 | 0.2278 | 0.3158 | 0.2647 | 0.8747 |
| 0.0006 | 112.0 | 2688 | 0.8811 | 0.2785 | 0.3860 | 0.3235 | 0.8921 |
| 0.0005 | 113.0 | 2712 | 0.9462 | 0.1905 | 0.2807 | 0.2270 | 0.8817 |
| 0.0005 | 114.0 | 2736 | 0.9685 | 0.2078 | 0.2807 | 0.2388 | 0.8792 |
| 0.0006 | 115.0 | 2760 | 1.0339 | 0.2712 | 0.2807 | 0.2759 | 0.8672 |
| 0.0004 | 116.0 | 2784 | 1.0155 | 0.2571 | 0.3158 | 0.2835 | 0.8687 |
| 0.0005 | 117.0 | 2808 | 0.9998 | 0.25 | 0.3509 | 0.2920 | 0.8768 |
| 0.0006 | 118.0 | 2832 | 0.9849 | 0.2473 | 0.4035 | 0.3067 | 0.8715 |
| 0.0033 | 119.0 | 2856 | 0.7929 | 0.2376 | 0.4211 | 0.3038 | 0.8832 |
| 0.0485 | 120.0 | 2880 | 0.9585 | 0.2 | 0.2807 | 0.2336 | 0.8585 |
| 0.0114 | 121.0 | 2904 | 0.7619 | 0.2472 | 0.3860 | 0.3014 | 0.8831 |
| 0.0177 | 122.0 | 2928 | 0.7737 | 0.2881 | 0.2982 | 0.2931 | 0.8688 |
| 0.02 | 123.0 | 2952 | 1.1362 | 0.1959 | 0.3333 | 0.2468 | 0.8214 |
| 0.0056 | 124.0 | 2976 | 1.2073 | 0.3659 | 0.2632 | 0.3061 | 0.8277 |
| 0.0208 | 125.0 | 3000 | 0.8549 | 0.2162 | 0.2807 | 0.2443 | 0.8430 |
| 0.0066 | 126.0 | 3024 | 0.9482 | 0.2667 | 0.2807 | 0.2735 | 0.8383 |
| 0.0155 | 127.0 | 3048 | 0.7532 | 0.2289 | 0.3333 | 0.2714 | 0.8629 |
| 0.0091 | 128.0 | 3072 | 0.7973 | 0.2368 | 0.3158 | 0.2707 | 0.8524 |
| 0.0029 | 129.0 | 3096 | 0.8988 | 0.25 | 0.3684 | 0.2979 | 0.8621 |
| 0.0054 | 130.0 | 3120 | 0.9882 | 0.2299 | 0.3509 | 0.2778 | 0.8362 |
| 0.0037 | 131.0 | 3144 | 1.0792 | 0.2093 | 0.3158 | 0.2517 | 0.8468 |
| 0.0012 | 132.0 | 3168 | 0.9729 | 0.2632 | 0.3509 | 0.3008 | 0.8427 |
| 0.0009 | 133.0 | 3192 | 0.9521 | 0.2043 | 0.3333 | 0.2533 | 0.8416 |
| 0.0011 | 134.0 | 3216 | 0.9539 | 0.1978 | 0.3158 | 0.2432 | 0.8401 |
| 0.0006 | 135.0 | 3240 | 0.9692 | 0.2754 | 0.3333 | 0.3016 | 0.8504 |
| 0.0007 | 136.0 | 3264 | 0.9811 | 0.2603 | 0.3333 | 0.2923 | 0.8526 |
| 0.0007 | 137.0 | 3288 | 0.9732 | 0.25 | 0.3333 | 0.2857 | 0.8444 |
| 0.0004 | 138.0 | 3312 | 0.9955 | 0.2278 | 0.3158 | 0.2647 | 0.8373 |
| 0.0005 | 139.0 | 3336 | 0.9939 | 0.2466 | 0.3158 | 0.2769 | 0.8389 |
| 0.001 | 140.0 | 3360 | 1.0081 | 0.2432 | 0.3158 | 0.2748 | 0.8377 |
| 0.0006 | 141.0 | 3384 | 1.0216 | 0.2308 | 0.3158 | 0.2667 | 0.8404 |
| 0.0005 | 142.0 | 3408 | 1.0364 | 0.25 | 0.3158 | 0.2791 | 0.8332 |
| 0.0004 | 143.0 | 3432 | 1.0185 | 0.2571 | 0.3158 | 0.2835 | 0.8426 |
| 0.0006 | 144.0 | 3456 | 1.0168 | 0.2603 | 0.3333 | 0.2923 | 0.8458 |
| 0.0005 | 145.0 | 3480 | 1.0079 | 0.2754 | 0.3333 | 0.3016 | 0.8476 |
| 0.0006 | 146.0 | 3504 | 1.0080 | 0.25 | 0.3333 | 0.2857 | 0.8438 |
| 0.0004 | 147.0 | 3528 | 1.0194 | 0.2346 | 0.3333 | 0.2754 | 0.8396 |
| 0.0004 | 148.0 | 3552 | 1.0299 | 0.2262 | 0.3333 | 0.2695 | 0.8373 |
| 0.0005 | 149.0 | 3576 | 1.0331 | 0.2289 | 0.3333 | 0.2714 | 0.8387 |
| 0.0004 | 150.0 | 3600 | 1.0294 | 0.2436 | 0.3333 | 0.2815 | 0.8412 |
| 0.0004 | 151.0 | 3624 | 1.0366 | 0.2405 | 0.3333 | 0.2794 | 0.8410 |
| 0.0004 | 152.0 | 3648 | 1.0533 | 0.2468 | 0.3333 | 0.2836 | 0.8448 |
| 0.0005 | 153.0 | 3672 | 1.0379 | 0.2879 | 0.3333 | 0.3089 | 0.8458 |
| 0.0005 | 154.0 | 3696 | 1.0395 | 0.2836 | 0.3333 | 0.3065 | 0.8454 |
| 0.0004 | 155.0 | 3720 | 1.0438 | 0.2836 | 0.3333 | 0.3065 | 0.8453 |
| 0.0004 | 156.0 | 3744 | 1.0475 | 0.2879 | 0.3333 | 0.3089 | 0.8453 |
| 0.0004 | 157.0 | 3768 | 1.0558 | 0.2794 | 0.3333 | 0.304 | 0.8450 |
| 0.0004 | 158.0 | 3792 | 1.0596 | 0.2754 | 0.3333 | 0.3016 | 0.8444 |
| 0.0004 | 159.0 | 3816 | 1.0633 | 0.2836 | 0.3333 | 0.3065 | 0.8445 |
| 0.0004 | 160.0 | 3840 | 1.0653 | 0.2836 | 0.3333 | 0.3065 | 0.8445 |
| 0.0004 | 161.0 | 3864 | 1.0687 | 0.2754 | 0.3333 | 0.3016 | 0.8446 |
| 0.0004 | 162.0 | 3888 | 1.0732 | 0.2714 | 0.3333 | 0.2992 | 0.8448 |
| 0.0005 | 163.0 | 3912 | 1.0729 | 0.2568 | 0.3333 | 0.2901 | 0.8444 |
| 0.0004 | 164.0 | 3936 | 1.0764 | 0.2533 | 0.3333 | 0.2879 | 0.8436 |
| 0.0005 | 165.0 | 3960 | 1.0737 | 0.2794 | 0.3333 | 0.304 | 0.8465 |
| 0.0005 | 166.0 | 3984 | 1.0700 | 0.2754 | 0.3333 | 0.3016 | 0.8482 |
| 0.0004 | 167.0 | 4008 | 1.0679 | 0.2794 | 0.3333 | 0.304 | 0.8496 |
| 0.0005 | 168.0 | 4032 | 1.0695 | 0.2676 | 0.3333 | 0.2969 | 0.8498 |
| 0.0004 | 169.0 | 4056 | 1.0704 | 0.2714 | 0.3333 | 0.2992 | 0.8498 |
| 0.0005 | 170.0 | 4080 | 1.0716 | 0.2794 | 0.3333 | 0.304 | 0.8495 |
| 0.0004 | 171.0 | 4104 | 1.0702 | 0.2639 | 0.3333 | 0.2946 | 0.8498 |
| 0.0005 | 172.0 | 4128 | 1.0713 | 0.25 | 0.3333 | 0.2857 | 0.8491 |
| 0.0004 | 173.0 | 4152 | 1.0736 | 0.2436 | 0.3333 | 0.2815 | 0.8491 |
| 0.0005 | 174.0 | 4176 | 1.0808 | 0.2568 | 0.3333 | 0.2901 | 0.8486 |
| 0.0004 | 175.0 | 4200 | 1.0867 | 0.2639 | 0.3333 | 0.2946 | 0.8486 |
| 0.0004 | 176.0 | 4224 | 1.0899 | 0.2754 | 0.3333 | 0.3016 | 0.8486 |
| 0.0004 | 177.0 | 4248 | 1.0900 | 0.2603 | 0.3333 | 0.2923 | 0.8486 |
| 0.0005 | 178.0 | 4272 | 1.0871 | 0.2754 | 0.3333 | 0.3016 | 0.8489 |
| 0.0004 | 179.0 | 4296 | 1.0863 | 0.2794 | 0.3333 | 0.304 | 0.8492 |
| 0.0004 | 180.0 | 4320 | 1.0892 | 0.2754 | 0.3333 | 0.3016 | 0.8493 |
| 0.0004 | 181.0 | 4344 | 1.0919 | 0.2639 | 0.3333 | 0.2946 | 0.8489 |
| 0.0004 | 182.0 | 4368 | 1.0933 | 0.2639 | 0.3333 | 0.2946 | 0.8490 |
| 0.0004 | 183.0 | 4392 | 1.0949 | 0.2639 | 0.3333 | 0.2946 | 0.8489 |
| 0.0004 | 184.0 | 4416 | 1.0953 | 0.2639 | 0.3333 | 0.2946 | 0.8489 |
| 0.0004 | 185.0 | 4440 | 1.1031 | 0.2714 | 0.3333 | 0.2992 | 0.8496 |
| 0.0004 | 186.0 | 4464 | 1.1049 | 0.2639 | 0.3333 | 0.2946 | 0.8494 |
| 0.0004 | 187.0 | 4488 | 1.1082 | 0.2676 | 0.3333 | 0.2969 | 0.8495 |
| 0.0004 | 188.0 | 4512 | 1.1091 | 0.2639 | 0.3333 | 0.2946 | 0.8494 |
| 0.0004 | 189.0 | 4536 | 1.1109 | 0.2639 | 0.3333 | 0.2946 | 0.8494 |
| 0.0004 | 190.0 | 4560 | 1.1119 | 0.2639 | 0.3333 | 0.2946 | 0.8494 |
| 0.0004 | 191.0 | 4584 | 1.1129 | 0.2603 | 0.3333 | 0.2923 | 0.8494 |
| 0.0004 | 192.0 | 4608 | 1.1139 | 0.2639 | 0.3333 | 0.2946 | 0.8494 |
| 0.0005 | 193.0 | 4632 | 1.1051 | 0.2676 | 0.3333 | 0.2969 | 0.8497 |
| 0.0004 | 194.0 | 4656 | 1.1037 | 0.2639 | 0.3333 | 0.2946 | 0.8495 |
| 0.0004 | 195.0 | 4680 | 1.1045 | 0.2568 | 0.3333 | 0.2901 | 0.8496 |
| 0.0004 | 196.0 | 4704 | 1.1052 | 0.2568 | 0.3333 | 0.2901 | 0.8496 |
| 0.0004 | 197.0 | 4728 | 1.1057 | 0.2568 | 0.3333 | 0.2901 | 0.8496 |
| 0.0004 | 198.0 | 4752 | 1.1057 | 0.2533 | 0.3333 | 0.2879 | 0.8497 |
| 0.0004 | 199.0 | 4776 | 1.1061 | 0.2533 | 0.3333 | 0.2879 | 0.8497 |
| 0.0004 | 200.0 | 4800 | 1.1061 | 0.2533 | 0.3333 | 0.2879 | 0.8498 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
tyavika/percobaan_cnnlstm | tyavika | 2023-06-27T17:10:40Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-25T04:49:10Z | ---
tags:
- generated_from_trainer
model-index:
- name: percobaan_cnnlstm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# percobaan_cnnlstm
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9961 | 1.0 | 756 | 4.4533 |
| 4.2209 | 2.0 | 1512 | 4.3710 |
| 3.861 | 3.0 | 2268 | 4.5066 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.2
|
kojitakahiro/dar | kojitakahiro | 2023-06-27T17:05:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T17:03:16Z | ---
license: creativeml-openrail-m
---
|
numanBot/summary_annotation_score | numanBot | 2023-06-27T16:45:33Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-27T16:32:58Z | from transformers import TFAutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("numanBot/summary_annotation_score", num_labels=1) |
breadlicker45/dough-instruct-base-001 | breadlicker45 | 2023-06-27T16:42:18Z | 1,635 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dataset:breadlicker45/bread-qa",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-27T15:39:22Z | ---
datasets:
- breadlicker45/bread-qa
--- |
jdawnduan/q-FrozenLake-v1-4x4-noSlippery | jdawnduan | 2023-06-27T16:40:29Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-27T16:40:26Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # gym is needed for env creation; load_from_hub is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="jdawnduan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
maidh/YOUR_REPO_ID | maidh | 2023-06-27T16:34:59Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-27T15:44:43Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.73 +/- 5.48
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r WilliamADSP/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
slone/fastText-LID-323 | slone | 2023-06-27T16:28:03Z | 4 | 9 | fasttext | [
"fasttext",
"text-classification",
"language-identification",
"arxiv:2209.09368",
"region:us"
] | text-classification | 2022-09-15T06:44:18Z | ---
library_name: fasttext
tags:
- text-classification
- language-identification
---
This is a fastText-based language classification model from the paper [The first neural machine translation system for the Erzya language](https://arxiv.org/abs/2209.09368).
It supports 323 languages used in Wikipedia (as of July 2022), and has extended support of the Erzya (`myv`) and Moksha (`mdf`) languages.
Example usage:
```Python
import fasttext
import urllib.request
import os
model_path = 'lid.323.ftz'
url = 'https://huggingface.co/slone/fastText-LID-323/resolve/main/lid.323.ftz'
if not os.path.exists(model_path):
urllib.request.urlretrieve(url, model_path) # or just download it manually
model = fasttext.load_model(model_path)
languages, scores = model.predict("эрзянь кель", k=3) # k is the number of returned hypotheses
```
The model was trained on texts of articles randomly sampled from Wikipedia. It works better with sentences and longer texts than with words, and may be sensitive to noise. |
chunwoolee0/xlm-roberta-base-finetuned-panx-de | chunwoolee0 | 2023-06-27T16:09:19Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-27T15:29:35Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8653353814644136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8653
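For a quick qualitative check, the checkpoint can be loaded with the `pipeline` API. A minimal sketch (the German example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="chunwoolee0/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```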
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 |
| 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 |
| 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ag159/ppo-Huggy | ag159 | 2023-06-27T16:09:09Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-27T16:08:59Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ag159/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
guimrtns/ppo-Huggy | guimrtns | 2023-06-27T15:55:41Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-27T14:48:43Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: guimrtns/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mnicamartins8/bert-base-uncased-with-expansion-correction | mnicamartins8 | 2023-06-27T15:47:40Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-25T23:17:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-with-expansion-correction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-with-expansion-correction
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Accuracy: 0.9099
- Precision: 0.9142
- Recall: 0.9099
- F1: 0.9114
- Balanced Acc: 0.8900
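A minimal inference sketch (the example sentence is illustrative; the label set is not documented in this card, so the output may use generic `LABEL_i` names unless the config defines `id2label`):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="mnicamartins8/bert-base-uncased-with-expansion-correction",
)
print(clf("This is an example sentence."))
```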
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Guilherme34/Jennifer-lora-7bvChatv2 | Guilherme34 | 2023-06-27T15:45:53Z | 0 | 1 | null | [
"tensorboard",
"pt",
"region:us"
] | null | 2023-05-16T16:40:03Z | ---
language:
- pt
---
This is the second Chat version of a fine-tuned artificial intelligence that speaks Brazilian Portuguese. It was trained on top of decapoda's LLaMA 7B using zetavg's LLaMA-LoRA Tuner, with the cabrita-lora, alpaca-cleaned and WizardLM_alpaca_evol_instruct_70k_unfiltered datasets, as well as datasets from Baize.
Have fun! |
gongliyu/fine-tuned-t5-small | gongliyu | 2023-06-27T15:44:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-23T19:00:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: fine-tuned-t5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-t5-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5422
- Precision: nan
- Recall: 0.7117
- F1: 0.5635
- Hashcode: roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2)
- Gen Len: 19.0
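The card does not state the downstream task, so the sketch below only shows how to run the checkpoint through text-to-text generation; the input prompt is purely illustrative:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="gongliyu/fine-tuned-t5-small")
print(generator("The quick brown fox jumps over the lazy dog.", max_new_tokens=20))
```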
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Hashcode | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:------------------------------------------------------:|:-------:|
| No log | 1.0 | 1 | 12.9679 | 0.7745 | 0.7227 | 0.7474 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 2.0 | 2 | 12.1426 | 0.7811 | 0.7221 | 0.7503 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 3.0 | 3 | 11.2809 | 0.7811 | 0.7221 | 0.7503 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 4.0 | 4 | 10.4669 | 0.7821 | 0.7273 | 0.7536 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 5.0 | 5 | 9.7061 | 0.7821 | 0.7273 | 0.7536 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 6.0 | 6 | 9.0054 | 0.7821 | 0.7273 | 0.7536 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 7.0 | 7 | 8.3875 | 0.7821 | 0.7273 | 0.7536 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 8.0 | 8 | 7.8287 | 0.7772 | 0.7278 | 0.7515 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 9.0 | 9 | 7.3385 | 0.7772 | 0.7278 | 0.7515 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 10.0 | 10 | 6.9141 | 0.7772 | 0.7278 | 0.7515 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 11.0 | 11 | 6.5516 | 0.7801 | 0.7240 | 0.7509 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 12.0 | 12 | 6.2399 | 0.7801 | 0.7240 | 0.7509 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 13.0 | 13 | 5.9851 | 0.7801 | 0.7240 | 0.7509 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 14.0 | 14 | 5.7744 | 0.7801 | 0.7240 | 0.7509 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 15.0 | 15 | 5.5976 | 0.7801 | 0.7240 | 0.7509 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 16.0 | 16 | 5.4546 | 0.7873 | 0.7158 | 0.7497 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 17.0 | 17 | 5.3403 | 0.7873 | 0.7158 | 0.7497 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 18.0 | 18 | 5.2461 | 0.7873 | 0.7158 | 0.7497 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 19.0 | 19 | 5.1688 | 0.7873 | 0.7158 | 0.7497 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 20.0 | 20 | 5.1052 | 0.7922 | 0.7169 | 0.7525 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 21.0 | 21 | 5.0489 | 0.7922 | 0.7169 | 0.7525 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 22.0 | 22 | 5.0025 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 23.0 | 23 | 4.9621 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 24.0 | 24 | 4.9263 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 25.0 | 25 | 4.8933 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 26.0 | 26 | 4.8623 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 27.0 | 27 | 4.8327 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 28.0 | 28 | 4.8060 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 29.0 | 29 | 4.7811 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 30.0 | 30 | 4.7583 | 0.7712 | 0.7105 | 0.7392 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 31.0 | 31 | 4.7361 | 0.7712 | 0.7105 | 0.7392 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 32.0 | 32 | 4.7152 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 33.0 | 33 | 4.6964 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 34.0 | 34 | 4.6789 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 35.0 | 35 | 4.6627 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 36.0 | 36 | 4.6475 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 37.0 | 37 | 4.6330 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 38.0 | 38 | 4.6192 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 39.0 | 39 | 4.6066 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 40.0 | 40 | 4.5957 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 41.0 | 41 | 4.5859 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 42.0 | 42 | 4.5771 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 43.0 | 43 | 4.5693 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 44.0 | 44 | 4.5625 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 45.0 | 45 | 4.5567 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 46.0 | 46 | 4.5518 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 47.0 | 47 | 4.5480 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 48.0 | 48 | 4.5451 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 49.0 | 49 | 4.5432 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 50.0 | 50 | 4.5422 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Yhyu13/vicuna-33b-v1.3-gptq-4bit | Yhyu13 | 2023-06-27T15:37:52Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-27T14:46:37Z | ---
license: apache-2.0
---
GPTQ 4-bit (no act-order) version, kept for compatibility; it works in text-generation-webui.
Generated using scripts from https://gitee.com/yhyu13/llama_-tools
Original weight : https://huggingface.co/lmsys/vicuna-33b-v1.3 |
rafaeljosem/DeepESP-gpt2-spanish-tripadvisor | rafaeljosem | 2023-06-27T15:35:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-25T22:23:03Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: DeepESP-gpt2-spanish-tripadvisor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeepESP-gpt2-spanish-tripadvisor
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5865
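A minimal generation sketch (the Spanish prompt is illustrative and the sampling parameters are arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="rafaeljosem/DeepESP-gpt2-spanish-tripadvisor")
print(generator("El hotel estaba", max_new_tokens=40, do_sample=True, top_p=0.95))
```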
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8665 | 1.0 | 2089 | 0.7441 |
| 0.7336 | 2.0 | 4178 | 0.6916 |
| 0.6856 | 3.0 | 6267 | 0.6632 |
| 0.6559 | 4.0 | 8356 | 0.6446 |
| 0.6341 | 5.0 | 10445 | 0.6322 |
| 0.6169 | 6.0 | 12534 | 0.6213 |
| 0.6022 | 7.0 | 14623 | 0.6138 |
| 0.5896 | 8.0 | 16712 | 0.6096 |
| 0.5788 | 9.0 | 18801 | 0.6037 |
| 0.5692 | 10.0 | 20890 | 0.5989 |
| 0.5604 | 11.0 | 22979 | 0.5965 |
| 0.5528 | 12.0 | 25068 | 0.5941 |
| 0.5457 | 13.0 | 27157 | 0.5915 |
| 0.5392 | 14.0 | 29246 | 0.5900 |
| 0.5334 | 15.0 | 31335 | 0.5879 |
| 0.5285 | 16.0 | 33424 | 0.5875 |
| 0.524 | 17.0 | 35513 | 0.5870 |
| 0.5209 | 18.0 | 37602 | 0.5866 |
| 0.5179 | 19.0 | 39691 | 0.5867 |
| 0.5157 | 20.0 | 41780 | 0.5865 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mnicamartins8/bert-base-uncased-with-misspelling-expansion-correction | mnicamartins8 | 2023-06-27T15:34:24Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-27T15:27:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-with-misspelling-expansion-correction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-with-misspelling-expansion-correction
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2229
- Accuracy: 0.9083
- Precision: 0.9132
- Recall: 0.9083
- F1: 0.9100
- Balanced Acc: 0.8893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
eutimio-arevalo-valarezo/ppo-LunarLander-v2 | eutimio-arevalo-valarezo | 2023-06-27T15:33:42Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-27T15:33:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.30 +/- 26.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
maidh/ppo-LunarLander-v2 | maidh | 2023-06-27T15:32:32Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-04-20T10:05:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.58 +/- 12.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Allenpai/llm | Allenpai | 2023-06-27T15:21:51Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-27T13:06:04Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
GabrielCaido/ppo-LunarLander-v2 | GabrielCaido | 2023-06-27T15:13:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-27T15:13:02Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.44 +/- 20.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ahishamm/vit-large-HAM-10000-patch-32 | ahishamm | 2023-06-27T15:13:14Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-27T14:14:58Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-HAM-10000-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-HAM-10000-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/HAM_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4810
- Accuracy: 0.8364
- Recall: 0.8364
- F1: 0.8364
- Precision: 0.8364
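A minimal inference sketch with the image-classification pipeline (the image path is a placeholder for a dermatoscopic image):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ahishamm/vit-large-HAM-10000-patch-32")
print(classifier("path/to/skin_lesion.jpg"))  # replace with a real image path or URL
```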
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.6405 | 0.2 | 100 | 0.7318 | 0.7481 | 0.7481 | 0.7481 | 0.7481 |
| 0.7062 | 0.4 | 200 | 0.7735 | 0.7416 | 0.7416 | 0.7416 | 0.7416 |
| 0.6334 | 0.6 | 300 | 0.6075 | 0.7781 | 0.7781 | 0.7781 | 0.7781 |
| 0.7102 | 0.8 | 400 | 0.6618 | 0.7661 | 0.7661 | 0.7661 | 0.7661 |
| 0.6814 | 1.0 | 500 | 0.5717 | 0.7890 | 0.7890 | 0.7890 | 0.7890 |
| 0.4618 | 1.2 | 600 | 0.5624 | 0.8030 | 0.8030 | 0.8030 | 0.8030 |
| 0.3824 | 1.4 | 700 | 0.5987 | 0.7766 | 0.7766 | 0.7766 | 0.7766 |
| 0.4191 | 1.6 | 800 | 0.5145 | 0.8190 | 0.8190 | 0.8190 | 0.8190 |
| 0.3998 | 1.8 | 900 | 0.5226 | 0.8090 | 0.8090 | 0.8090 | 0.8090 |
| 0.4677 | 2.0 | 1000 | 0.4927 | 0.8219 | 0.8219 | 0.8219 | 0.8219 |
| 0.2191 | 2.2 | 1100 | 0.5477 | 0.8284 | 0.8284 | 0.8284 | 0.8284 |
| 0.2302 | 2.4 | 1200 | 0.5018 | 0.8329 | 0.8329 | 0.8329 | 0.8329 |
| 0.191 | 2.59 | 1300 | 0.4810 | 0.8364 | 0.8364 | 0.8364 | 0.8364 |
| 0.1736 | 2.79 | 1400 | 0.5096 | 0.8334 | 0.8334 | 0.8334 | 0.8334 |
| 0.1049 | 2.99 | 1500 | 0.5944 | 0.8364 | 0.8364 | 0.8364 | 0.8364 |
| 0.0612 | 3.19 | 1600 | 0.5552 | 0.8464 | 0.8464 | 0.8464 | 0.8464 |
| 0.0181 | 3.39 | 1700 | 0.6199 | 0.8434 | 0.8434 | 0.8434 | 0.8434 |
| 0.0816 | 3.59 | 1800 | 0.5081 | 0.8534 | 0.8534 | 0.8534 | 0.8534 |
| 0.039 | 3.79 | 1900 | 0.5349 | 0.8544 | 0.8544 | 0.8544 | 0.8544 |
| 0.0208 | 3.99 | 2000 | 0.5445 | 0.8544 | 0.8544 | 0.8544 | 0.8544 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cointegrated/rut5-base-labse-decoder | cointegrated | 2023-06-27T14:59:57Z | 125 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"russian",
"ru",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-17T16:38:34Z | ---
language: ["ru"]
tags:
- russian
license: mit
---
This is the [rut5-base](https://huggingface.co/cointegrated/rut5-base) model, with the decoder fine-tuned to recover (approximately) Russian sentences from their [LaBSE](https://huggingface.co/sentence-transformers/LaBSE) embeddings. Details are [here](https://habr.com/ru/post/677618/) (in Russian).
It can be used, for example, for:
- Paraphrasing Russian sentences;
- Translating from the 109 LaBSE languages to Russian;
- Summarizing a collection of sentences with a single sentence;
- Interpolating between sentences;
- Few-shot text style transfer (including cross-lingual).
Example code:
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoModel
from transformers.modeling_outputs import BaseModelOutput
enc_tokenizer = AutoTokenizer.from_pretrained('cointegrated/LaBSE-en-ru')
encoder = AutoModel.from_pretrained('cointegrated/LaBSE-en-ru')
dec_tokenizer = AutoTokenizer.from_pretrained('cointegrated/rut5-base-labse-decoder')
decoder = AutoModelForSeq2SeqLM.from_pretrained('cointegrated/rut5-base-labse-decoder')
def encode(texts):
encoded_input = enc_tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors='pt')
with torch.no_grad():
model_output = encoder(**encoded_input.to(encoder.device))
embeddings = model_output.pooler_output
embeddings = torch.nn.functional.normalize(embeddings)
return embeddings
# encode some texts into vectors
embeddings = encode([
"4 декабря 2000 года",
"Давно такого не читала, очень хорошо пишешь!",
"Я тогда не понимала, что происходит, не понимаю и сейчас.",
"London is the capital of Great Britain.",
])
print(embeddings.shape)
# torch.Size([4, 768])
# now try to recover the texts from the vectors
out = decoder.generate(
encoder_outputs=BaseModelOutput(last_hidden_state=embeddings.unsqueeze(1)),
max_length=256,
repetition_penalty=3.0,
)
for tokens in out:
print(dec_tokenizer.decode(tokens, skip_special_tokens=True))
# После 4 декабря 2000 года
# Не так давно, это многое читала!
# Я не понимала того, что происходит сейчас тогда, дальше.
# Британская столица Англии.
``` |
uf-aice-lab/git_20 | uf-aice-lab | 2023-06-27T14:58:46Z | 98 | 1 | transformers | [
"transformers",
"pytorch",
"git",
"image-text-to-text",
"image-to-text",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2023-06-27T14:50:51Z | ---
license: mit
language:
- en
pipeline_tag: image-to-text
---
# git_20
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned from Microsoft GIT on a single Nvidia A100-80G GPU. As training data, we extracted 100,000 student assignments containing teacher feedback from a pool of 3 million student assignments. The training data pairs the image part (student assignments) with the text part (teacher feedback). git_20 consists of 18 layers and over 170 million parameters, consuming up to 0.7 gigabytes of disk space. The project aims to use multi-modal and multi-task deep learning models to create a machine learning pipeline that provides automatic diagnostic feedback for students' mathematical reasoning. Researchers can experiment with and fine-tune the model to help construct multimodal models that effectively provide such feedback.
### How to use it with Hugging Face Transformers
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model = AutoModelForCausalLM.from_pretrained("uf-aice-lab/git_20")
processor = AutoProcessor.from_pretrained("uf-aice-lab/git_20")

image_path = 'Please enter the image address here'
image = Image.open(image_path)
# In a notebook you can preview the image with: display(image)

pixel_values = processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    outputs = model.generate(pixel_values=pixel_values, max_length=50)
answer = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(answer)
``` |
nolanaatama/vllgrfrmmncrftrvcv2500pchnlgspdrwb | nolanaatama | 2023-06-27T14:52:24Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T14:38:47Z | ---
license: creativeml-openrail-m
---
|
Ellbendls/ppo-Pyramid | Ellbendls | 2023-06-27T14:42:01Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-06-27T14:40:32Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Ellbendls/ppo-Pyramid
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
librarian-bots/BERTopic_model_card_bias | librarian-bots | 2023-06-27T14:21:46Z | 24 | 3 | bertopic | [
"bertopic",
"metadata",
"model cards",
"bias",
"text-classification",
"en",
"dataset:davanstrien/model_cards_with_readmes",
"license:mit",
"region:us"
] | text-classification | 2023-05-11T10:31:44Z | ---
tags:
- bertopic
- metadata
- model cards
- bias
library_name: bertopic
datasets:
- davanstrien/model_cards_with_readmes
language:
- en
license: mit
pipeline_tag: text-classification
inference: false
---
# BERTopic model card bias topic model
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("librarian-bots/BERTopic_model_card_bias")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 11
* Number of training documents: 1271
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | evaluation - claim - reasoning - parameters - university | 13 | -1_evaluation_claim_reasoning_parameters |
| 0 | checkpoint - fairly - characterized - even - sectionhttpshuggingfacecobertbaseuncased | 13 | 0_checkpoint_fairly_characterized_even |
| 1 | generative - research - uses - processes - artistic | 137 | 1_generative_research_uses_processes |
| 2 | checkpoint - try - snippet - sectionhttpshuggingfacecobertbaseuncased - limitation | 48 | 2_checkpoint_try_snippet_sectionhttpshuggingfacecobertbaseuncased |
| 3 | meant - technical - sociotechnical - convey - needed | 32 | 3_meant_technical_sociotechnical_convey |
| 4 | gpt2 - team - their - cardhttpsgithubcomopenaigpt2blobmastermodelcardmd - worked | 32 | 4_gpt2_team_their_cardhttpsgithubcomopenaigpt2blobmastermodelcardmd |
| 5 | datasets - internet - unfiltered - therefore - lot | 27 | 5_datasets_internet_unfiltered_therefore |
| 6 | dacy - danish - pipelines - transformer - bert | 25 | 6_dacy_danish_pipelines_transformer |
| 7 | your - pythia - branch - checkpoints - provide | 20 | 7_your_pythia_branch_checkpoints |
| 8 | opt - trained - large - software - code | 15 | 8_opt_trained_large_software |
| 9 | al - et - identity - occupational - groups | 15 | 9_al_et_identity_occupational |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.29.0
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.11 |
MariaK/whisper-tiny-minds-v5-numproc1 | MariaK | 2023-06-27T14:17:11Z | 93 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-27T13:53:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds-v5-numproc1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[451:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.37507453786523554
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds-v5-numproc1
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6530
- Wer Ortho: 0.4102
- Wer: 0.3751
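A minimal transcription sketch (the audio path is a placeholder; the pipeline decodes and resamples common audio formats):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="MariaK/whisper-tiny-minds-v5-numproc1",
)
print(asr("path/to/audio.wav"))  # replace with a real audio file
```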
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.4354 | 3.57 | 100 | 0.5542 | 0.4539 | 0.3870 |
| 0.066 | 7.14 | 200 | 0.5501 | 0.4059 | 0.3554 |
| 0.0086 | 10.71 | 300 | 0.6204 | 0.3953 | 0.3542 |
| 0.0028 | 14.29 | 400 | 0.6455 | 0.3990 | 0.3631 |
| 0.0022 | 17.86 | 500 | 0.6530 | 0.4102 | 0.3751 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-large-HAM-10000-patch-16 | ahishamm | 2023-06-27T14:14:41Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-27T12:58:26Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-HAM-10000-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-HAM-10000-patch-16
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/HAM_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5464
- Accuracy: 0.8095
- Recall: 0.8095
- F1: 0.8095
- Precision: 0.8095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.8281 | 0.2 | 100 | 0.9228 | 0.6788 | 0.6788 | 0.6788 | 0.6788 |
| 0.912 | 0.4 | 200 | 0.8353 | 0.7147 | 0.7147 | 0.7147 | 0.7147 |
| 0.6741 | 0.6 | 300 | 0.7841 | 0.7377 | 0.7377 | 0.7377 | 0.7377 |
| 0.8472 | 0.8 | 400 | 0.6710 | 0.7566 | 0.7566 | 0.7566 | 0.7566 |
| 0.7758 | 1.0 | 500 | 0.7587 | 0.7087 | 0.7087 | 0.7087 | 0.7087 |
| 0.5388 | 1.2 | 600 | 0.6607 | 0.7746 | 0.7746 | 0.7746 | 0.7746 |
| 0.5067 | 1.4 | 700 | 0.6133 | 0.7701 | 0.7701 | 0.7701 | 0.7701 |
| 0.4992 | 1.6 | 800 | 0.6075 | 0.7786 | 0.7786 | 0.7786 | 0.7786 |
| 0.5761 | 1.8 | 900 | 0.6286 | 0.7691 | 0.7691 | 0.7691 | 0.7691 |
| 0.5892 | 2.0 | 1000 | 0.5498 | 0.8035 | 0.8035 | 0.8035 | 0.8035 |
| 0.4258 | 2.2 | 1100 | 0.5901 | 0.7940 | 0.7940 | 0.7940 | 0.7940 |
| 0.4066 | 2.4 | 1200 | 0.5553 | 0.8025 | 0.8025 | 0.8025 | 0.8025 |
| 0.3032 | 2.59 | 1300 | 0.5754 | 0.8030 | 0.8030 | 0.8030 | 0.8030 |
| 0.3843 | 2.79 | 1400 | 0.5464 | 0.8095 | 0.8095 | 0.8095 | 0.8095 |
| 0.2679 | 2.99 | 1500 | 0.5683 | 0.8100 | 0.8100 | 0.8100 | 0.8100 |
| 0.1787 | 3.19 | 1600 | 0.5931 | 0.8195 | 0.8195 | 0.8195 | 0.8195 |
| 0.105 | 3.39 | 1700 | 0.6488 | 0.8279 | 0.8279 | 0.8279 | 0.8279 |
| 0.2138 | 3.59 | 1800 | 0.6414 | 0.8130 | 0.8130 | 0.8130 | 0.8130 |
| 0.1336 | 3.79 | 1900 | 0.5920 | 0.8264 | 0.8264 | 0.8264 | 0.8264 |
| 0.1246 | 3.99 | 2000 | 0.5999 | 0.8289 | 0.8289 | 0.8289 | 0.8289 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Ellbendls/ppo-SnowballTarget | Ellbendls | 2023-06-27T13:48:46Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-06-26T13:12:07Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Ellbendls/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Idan0405/ClipMD | Idan0405 | 2023-06-27T13:37:21Z | 293 | 8 | transformers | [
"transformers",
"pytorch",
"clip",
"feature-extraction",
"medical",
"zero-shot-image-classification",
"custom_code",
"en",
"arxiv:2303.13340",
"doi:10.57967/hf/0898",
"region:us"
] | zero-shot-image-classification | 2023-03-22T19:39:57Z | ---
model_type: clip
tags:
- medical
language:
- en
inference: false
pipeline_tag: zero-shot-image-classification
---
# Model Card: ClipMD
## Model Details
ClipMD is a medical image-text matching model based on OpenAI's CLIP model with a sliding window text encoder.
### Model Description
The model uses a ViT-B/32 Transformer architecture as the image encoder and a masked sliding-window self-attention Transformer as the text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
The model was fine-tuned on the [ROCO dataset](https://github.com/razorx89/roco-dataset).
## Use with Transformers
```
from PIL import Image
from transformers import AutoProcessor,AutoModel
model = AutoModel.from_pretrained("Idan0405/ClipMD",trust_remote_code=True)
processor = AutoProcessor.from_pretrained("Idan0405/ClipMD")
image = Image.open("your image path")
inputs = processor(text=["chest x-ray", "head MRI"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs[0] # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
# See also
* [ClipMD repository on github.](https://github.cs.huji.ac.il/tomhope-lab/ClipMD)
* [ClipMD paper on arxiv](https://arxiv.org/abs/2303.13340) |
mfaiq2307/whisper-large-cahya-peft | mfaiq2307 | 2023-06-27T13:25:28Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"automatic-speech-recognition",
"license:other",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-24T12:36:44Z | ---
license: other
library_name: transformers
pipeline_tag: automatic-speech-recognition
---
This is a model for Indonesian speech recognition, obtained by applying LoRA to Whisper-large-v2. |
Jumtra/rinna-v1-tune-ep3 | Jumtra | 2023-06-27T13:23:15Z | 86 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ja",
"lm",
"nlp",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:kunishou/hh-rlhf-49k-ja",
"dataset:Jumtra/oasst1_ja",
"dataset:Jumtra/jglue_jnli",
"dataset:Jumtra/jglue_jsquad",
"dataset:Jumtra/jglue_jsquads_with_input",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-27T12:19:44Z | ---
license: mit
tags:
- ja
- gpt_neox
- text-generation
- lm
- nlp
datasets:
- kunishou/databricks-dolly-15k-ja
- kunishou/hh-rlhf-49k-ja
- Jumtra/oasst1_ja
- Jumtra/jglue_jnli
- Jumtra/jglue_jsquad
- Jumtra/jglue_jsquads_with_input
inference: false
language:
- ja
---
# rinna-3.6b
This model was obtained by fine-tuning [Jumtra/rinna-3.6b-tune-ep5](https://huggingface.co/Jumtra/rinna-3.6b-tune-ep5) using MosaicML's llm-foundry repository.
## Model Date
June 28, 2023
## Model License
MIT
## Evaluation
The model's accuracy was evaluated with [Jumtra/test_data_100QA](https://huggingface.co/datasets/Jumtra/test_data_100QA).
The perplexity on the validation data used during training is also listed.
| model name | Accuracy | Perplexity |
| ---- | ---- | ---- |
| [Jumtra/rinna-3.6b-tune-ep5](https://huggingface.co/Jumtra/rinna-3.6b-tune-ep5)| 40/100 | 8.105 |
| [Jumtra/rinna-v1-tune-ep1](https://huggingface.co/Jumtra/rinna-v1-tune-ep1) | 42/100 | 7.458 |
| [Jumtra/rinna-v1-tune-ep3](https://huggingface.co/Jumtra/rinna-v1-tune-ep3) | 41/100 | 7.034 |
| [Jumtra/calm-7b-tune-ep4](https://huggingface.co/Jumtra/calm-7b-tune-ep4) | 40/100 | 9.766 |
| [Jumtra/calm-v3-ep1](https://huggingface.co/Jumtra/calm-v3-ep1) | 35/100 | 9.305 |
| [Jumtra/calm-v3-ep3](https://huggingface.co/Jumtra/calm-v3-ep3) | 37/100 | 13.276 |
The following prompt was used:
```python
INSTRUCTION_KEY = "### 入力:"
RESPONSE_KEY = "### 回答:"
INTRO_BLURB = "以下はタスクを説明する指示と文脈のある文章が含まれた入力です。要求を適切に満たす回答を生成しなさい。"
JP_PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
``` |
Jumtra/calm-v3-ep3 | Jumtra | 2023-06-27T13:22:43Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ja",
"lm",
"nlp",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:kunishou/hh-rlhf-49k-ja",
"dataset:Jumtra/oasst1_ja",
"dataset:Jumtra/jglue_jnli",
"dataset:Jumtra/jglue_jsquad",
"dataset:Jumtra/jglue_jsquads_with_input",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-27T12:24:32Z | ---
license: cc-by-sa-4.0
tags:
- ja
- gpt_neox
- text-generation
- lm
- nlp
datasets:
- kunishou/databricks-dolly-15k-ja
- kunishou/hh-rlhf-49k-ja
- Jumtra/oasst1_ja
- Jumtra/jglue_jnli
- Jumtra/jglue_jsquad
- Jumtra/jglue_jsquads_with_input
inference: false
language:
- ja
---
# open-calm-7b
This model was obtained by fine-tuning [Jumtra/calm-7b-tune-ep4](https://huggingface.co/Jumtra/calm-7b-tune-ep4) using MosaicML's llm-foundry repository.
## Model Date
June 28, 2023
## Model License
cc-by-sa-4.0
## Evaluation
The model's accuracy was evaluated with [Jumtra/test_data_100QA](https://huggingface.co/datasets/Jumtra/test_data_100QA).
The perplexity on the validation data used during training is also listed.
| model name | Accuracy | Perplexity |
| ---- | ---- | ---- |
| [Jumtra/rinna-3.6b-tune-ep5](https://huggingface.co/Jumtra/rinna-3.6b-tune-ep5)| 40/100 | 8.105 |
| [Jumtra/rinna-v1-tune-ep1](https://huggingface.co/Jumtra/rinna-v1-tune-ep1) | 42/100 | 7.458 |
| [Jumtra/rinna-v1-tune-ep3](https://huggingface.co/Jumtra/rinna-v1-tune-ep3) | 41/100 | 7.034 |
| [Jumtra/calm-7b-tune-ep4](https://huggingface.co/Jumtra/calm-7b-tune-ep4) | 40/100 | 9.766 |
| [Jumtra/calm-v3-ep1](https://huggingface.co/Jumtra/calm-v3-ep1) | 35/100 | 9.305 |
| [Jumtra/calm-v3-ep3](https://huggingface.co/Jumtra/calm-v3-ep3) | 37/100 | 13.276 |
The following prompt was used:
```python
INSTRUCTION_KEY = "### 入力:"
RESPONSE_KEY = "### 回答:"
INTRO_BLURB = "以下はタスクを説明する指示と文脈のある文章が含まれた入力です。要求を適切に満たす回答を生成しなさい。"
JP_PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
``` |
swl-models/SweetheartAnime-v1.0 | swl-models | 2023-06-27T13:22:27Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T13:18:34Z | ---
license: creativeml-openrail-m
---
|
MariaK/whisper-tiny-minds-v4-FC | MariaK | 2023-06-27T13:19:15Z | 84 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-27T12:51:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-minds-v4-FC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds-v4-FC
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6530
- Wer Ortho: 0.4102
- Wer: 0.3751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.4354 | 3.57 | 100 | 0.5542 | 0.4539 | 0.3870 |
| 0.066 | 7.14 | 200 | 0.5501 | 0.4059 | 0.3554 |
| 0.0086 | 10.71 | 300 | 0.6204 | 0.3953 | 0.3542 |
| 0.0028 | 14.29 | 400 | 0.6455 | 0.3990 | 0.3631 |
| 0.0022 | 17.86 | 500 | 0.6530 | 0.4102 | 0.3751 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
swl-models/SummerWind-v1.0 | swl-models | 2023-06-27T13:17:18Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T13:15:52Z | ---
license: creativeml-openrail-m
---
|
swl-models/CoffeescentAnime-v1.0 | swl-models | 2023-06-27T13:14:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T13:13:02Z | ---
license: creativeml-openrail-m
---
|
machinelearnear/falcon-7b-alpaca-lora-ca | machinelearnear | 2023-06-27T13:13:20Z | 3 | 1 | peft | [
"peft",
"region:us"
] | null | 2023-06-23T16:53:17Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
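No usage example is included in this card. Given the repository name, a loading sketch might look like the following; the base checkpoint id (`tiiuae/falcon-7b`) and the 8-bit settings are assumptions, chosen to mirror the quantization config above:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tiiuae/falcon-7b"  # assumption: inferred from the adapter's repository name
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_8bit=True,       # mirrors the bitsandbytes config used for training
    device_map="auto",
    trust_remote_code=True,  # Falcon shipped custom modelling code at the time
)
model = PeftModel.from_pretrained(base, "machinelearnear/falcon-7b-alpaca-lora-ca")
model.eval()
```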
|
swl-models/EarlySpring-v1.0 | swl-models | 2023-06-27T13:12:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T13:09:27Z | ---
license: creativeml-openrail-m
---
|
edofut/belle | edofut | 2023-06-27T13:11:53Z | 0 | 0 | null | [
"license:gpl-3.0",
"region:us"
] | null | 2023-06-27T12:01:06Z | ---
license: gpl-3.0
---
```python
from transformers import GPT3Tokenizer, GPT3ForChatbot


class JackBot:
    def __init__(self):
        self.tokenizer = GPT3Tokenizer.from_pretrained("gpt3.5-turbo")
        self.model = GPT3ForChatbot.from_pretrained("gpt3.5-turbo")

    def generate_response(self, input_text):
        input_ids = self.tokenizer.encode(input_text, return_tensors="pt")
        response = self.model.generate(input_ids, max_length=100, num_return_sequences=1, temperature=0.7)
        response_text = self.tokenizer.decode(response[0], skip_special_tokens=True)
        return response_text


bot = JackBot()
while True:
    user_input = input("User: ")
    if user_input.lower() == "bye":
        print("Jack: Goodbye!")
        break
    response = bot.generate_response(user_input)
    print("Jack:", response)
```
|
swl-models/MeimeiCartoon-v1.0 | swl-models | 2023-06-27T13:04:30Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T13:02:20Z | ---
license: creativeml-openrail-m
---
|
Shubham09/falcon_hcltech_p1 | Shubham09 | 2023-06-27T13:00:47Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-27T12:52:07Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
avecoder/marian-finetuned-kde4-en-to-ru | avecoder | 2023-06-27T13:00:25Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-06-27T04:40:39Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-ru
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-ru
split: train
args: en-ru
metrics:
- name: Bleu
type: bleu
value: 29.07778420930096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-ru
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3767
- Bleu: 29.0778
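A minimal translation sketch (the English input is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="avecoder/marian-finetuned-kde4-en-to-ru")
print(translator("Default to expanded threads"))
```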
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-base-HAM-10000-patch-32 | ahishamm | 2023-06-27T12:58:11Z | 201 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-27T12:23:15Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-HAM-10000-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-HAM-10000-patch-32
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the ahishamm/HAM_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5210
- Accuracy: 0.8040
- Recall: 0.8040
- F1: 0.8040
- Precision: 0.8040
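A minimal inference sketch using the image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="ahishamm/vit-base-HAM-10000-patch-32")
# Path to a dermatoscopic image (placeholder)
print(classifier("lesion_example.jpg"))
```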
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7 | 0.2 | 100 | 0.7878 | 0.7307 | 0.7307 | 0.7307 | 0.7307 |
| 0.8248 | 0.4 | 200 | 0.7338 | 0.7476 | 0.7476 | 0.7476 | 0.7476 |
| 0.6647 | 0.6 | 300 | 0.6417 | 0.7541 | 0.7541 | 0.7541 | 0.7541 |
| 0.6755 | 0.8 | 400 | 0.6682 | 0.7576 | 0.7576 | 0.7576 | 0.7576 |
| 0.7443 | 1.0 | 500 | 0.6037 | 0.7890 | 0.7890 | 0.7890 | 0.7890 |
| 0.5316 | 1.2 | 600 | 0.5963 | 0.7915 | 0.7915 | 0.7915 | 0.7915 |
| 0.4404 | 1.4 | 700 | 0.5626 | 0.7955 | 0.7955 | 0.7955 | 0.7955 |
| 0.4431 | 1.6 | 800 | 0.5719 | 0.8005 | 0.8005 | 0.8005 | 0.8005 |
| 0.5011 | 1.8 | 900 | 0.5581 | 0.7880 | 0.7880 | 0.7880 | 0.7880 |
| 0.4692 | 2.0 | 1000 | 0.5210 | 0.8040 | 0.8040 | 0.8040 | 0.8040 |
| 0.2648 | 2.2 | 1100 | 0.5776 | 0.8070 | 0.8070 | 0.8070 | 0.8070 |
| 0.2723 | 2.4 | 1200 | 0.5317 | 0.8180 | 0.8180 | 0.8180 | 0.8180 |
| 0.2325 | 2.59 | 1300 | 0.5223 | 0.8170 | 0.8170 | 0.8170 | 0.8170 |
| 0.2547 | 2.79 | 1400 | 0.5314 | 0.8244 | 0.8244 | 0.8244 | 0.8244 |
| 0.146 | 2.99 | 1500 | 0.5583 | 0.8274 | 0.8274 | 0.8274 | 0.8274 |
| 0.1224 | 3.19 | 1600 | 0.5960 | 0.8289 | 0.8289 | 0.8289 | 0.8289 |
| 0.0313 | 3.39 | 1700 | 0.6081 | 0.8304 | 0.8304 | 0.8304 | 0.8304 |
| 0.104 | 3.59 | 1800 | 0.5770 | 0.8339 | 0.8339 | 0.8339 | 0.8339 |
| 0.0538 | 3.79 | 1900 | 0.5364 | 0.8464 | 0.8464 | 0.8464 | 0.8464 |
| 0.0827 | 3.99 | 2000 | 0.5414 | 0.8454 | 0.8454 | 0.8454 | 0.8454 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Holmodi/Reinforce-policy-gradient | Holmodi | 2023-06-27T12:55:23Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-27T12:55:14Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-policy-gradient
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
advokat/tiraMOEsu | advokat | 2023-06-27T12:54:27Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T12:45:53Z | ---
license: creativeml-openrail-m
---
|
MariaK/whisper-tiny-minds-v3 | MariaK | 2023-06-27T12:44:57Z | 78 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-27T12:20:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-minds-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds-v3
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6530
- Wer Ortho: 0.4102
- Wer: 0.3751
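A minimal transcription sketch using the automatic-speech-recognition pipeline (the audio path is a placeholder):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="MariaK/whisper-tiny-minds-v3")
# Path to a 16 kHz audio file (placeholder)
print(asr("call_sample.wav"))
```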
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.4354 | 3.57 | 100 | 0.5542 | 0.4539 | 0.3870 |
| 0.066 | 7.14 | 200 | 0.5501 | 0.4059 | 0.3554 |
| 0.0086 | 10.71 | 300 | 0.6204 | 0.3953 | 0.3542 |
| 0.0028 | 14.29 | 400 | 0.6455 | 0.3990 | 0.3631 |
| 0.0022 | 17.86 | 500 | 0.6530 | 0.4102 | 0.3751 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fatcat22/ppo-Pyramids | fatcat22 | 2023-06-27T12:34:19Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-06-27T12:32:19Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: fatcat22/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible | ZTamas | 2023-06-27T12:34:15Z | 134 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"hu",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-02-09T11:14:23Z | ---
language:
- hu
pipeline_tag: question-answering
---
This model is a fine-tuned version of deepset/xlm-roberta-large-squad2 on the milqa dataset.
Packages to install for the large RoBERTa model:
```py
sentencepiece==0.1.97
protobuf==3.20.0
```
How to use:
```py
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model = "ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible",
tokenizer = "ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible",
device = 0, #GPU selection, -1 on CPU
handle_impossible_answer = True,
max_answer_len = 50 #This can be modified
)
predictions = qa_pipeline({
'context': context,
'question': question
})
print(predictions)
``` |
DORA1222/cra-test0627 | DORA1222 | 2023-06-27T12:31:22Z | 0 | 0 | null | [
"ab",
"license:other",
"region:us"
] | null | 2023-06-27T12:30:02Z | ---
license: other
language:
- ab
metrics:
- accuracy
--- |
kejolong/bayonetta2.0 | kejolong | 2023-06-27T12:10:46Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T12:06:19Z | ---
license: creativeml-openrail-m
---
|
iioSnail/bert-base-chinese-medical-ner | iioSnail | 2023-06-27T12:09:50Z | 593 | 9 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"chinese",
"ner",
"medical",
"zh",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-25T07:39:03Z | ---
license: afl-3.0
tags:
- chinese
- ner
- medical
language:
- zh
---
# Chinese Medical-Domain Named Entity Recognition
Project repository: https://github.com/iioSnail/chinese_medical_ner
How to use:
```
from transformers import AutoModelForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('iioSnail/bert-base-chinese-medical-ner')
model = AutoModelForTokenClassification.from_pretrained("iioSnail/bert-base-chinese-medical-ner")
sentences = ["瘦脸针、水光针和玻尿酸详解!", "半月板钙化的病因有哪些?"]
inputs = tokenizer(sentences, return_tensors="pt", padding=True, add_special_tokens=False)
outputs = model(**inputs)
outputs = outputs.logits.argmax(-1) * inputs['attention_mask']
print(outputs)
```
Output:
```
tensor([[1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 4, 4],
[1, 2, 2, 2, 3, 4, 4, 4, 4, 4, 4, 4, 0, 0]])
```
Here `1=B, 2=I, 3=E, 4=O`. The sequence `1, 3` marks a two-character medical entity, `1, 2, 3` a three-character entity, `1, 2, 2, 3` a four-character entity, and so on.
You can use `MedicalNerModel.format_outputs(sentences, outputs)` from the project to convert the outputs.
The converted result looks like this:
```
[
[
{'start': 0, 'end': 3, 'word': '瘦脸针'},
{'start': 4, 'end': 7, 'word': '水光针'},
        {'start': 8, 'end': 11, 'word': '玻尿酸'}
],
[
{'start': 0, 'end': 5, 'word': '半月板钙化'}
]
]
```
For more information, please refer to the project: https://github.com/iioSnail/chinese_medical_ner |
TheYuriLover/airoboros-13b-gpt4-1.4-GPTQ-32g-ao-ts | TheYuriLover | 2023-06-27T12:07:09Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-27T08:22:46Z | This is the gptq 4bit quantization of this model: https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4
This quantization was made using this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
I used the triton branch with all the GPTQ implementations enabled (true_sequential + act_order + groupsize 32)
|
J4m35M4xw3ll/Reinforce-CartPole-v1 | J4m35M4xw3ll | 2023-06-27T12:06:51Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-27T12:06:40Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
joncam14/rl_course_vizdoom_health_gathering_supreme | joncam14 | 2023-06-27T11:59:16Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-27T11:44:27Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.66 +/- 6.13
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r joncam14/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
youngp5/skin-conditions | youngp5 | 2023-06-27T11:50:31Z | 217 | 1 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"medical",
"en",
"dataset:youngp5/tumors",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-20T17:14:27Z | ---
license: mit
datasets:
- youngp5/tumors
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- medical
--- |
aidn/squadBert3Epochs | aidn | 2023-06-27T11:39:42Z | 63 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-27T10:47:14Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aidn/squadBert3Epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aidn/squadBert3Epochs
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8730
- Validation Loss: 1.1031
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8758, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5485 | 1.1485 | 0 |
| 0.9929 | 1.1031 | 1 |
| 0.8730 | 1.1031 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Olehf/Re | Olehf | 2023-06-27T11:36:04Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-27T11:36:04Z | ---
license: bigscience-openrail-m
---
|
Anmol0130/bottle_detection_june | Anmol0130 | 2023-06-27T11:25:56Z | 189 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-27T11:25:49Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: bottle_detection_june
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.84375
---
# bottle_detection_june
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Dewar's_12_Years

#### Dewar's_white_lable

#### bacardi_black

#### bacardi_carta_blanca

#### bacardi_carta_negra

#### bacardi_carta_oro

#### bombay_sapphire

#### coka_cola

#### martini
 |
ahishamm/vit-huge-PH2-patch-14 | ahishamm | 2023-06-27T11:21:19Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-27T11:18:25Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-PH2-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-PH2-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/ph2_vit_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3385
- Accuracy: 0.875
- Recall: 0.875
- F1: 0.875
- Precision: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-large-PH2-patch-16 | ahishamm | 2023-06-27T11:16:04Z | 190 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-27T11:14:23Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-PH2-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-PH2-patch-16
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/ph2_vit_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5830
- Accuracy: 0.85
- Recall: 0.85
- F1: 0.85
- Precision: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-base-PH2-patch-32 | ahishamm | 2023-06-27T11:14:06Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-27T11:13:12Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-PH2-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-PH2-patch-32
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the ahishamm/ph2_vit_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3667
- Accuracy: 0.875
- Recall: 0.875
- F1: 0.875
- Precision: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-large-PH2-sharpened-patch-32 | ahishamm | 2023-06-27T10:54:39Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-27T10:51:42Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-PH2-sharpened-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-PH2-sharpened-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0309
- Accuracy: 1.0
- Recall: 1.0
- F1: 1.0
- Precision: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
antoninobrillante/gtl-elephant-test2 | antoninobrillante | 2023-06-27T10:53:16Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-27T10:41:22Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### gtl-elephant-test2 Dreambooth model trained by antoninobrillante with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
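Alternatively, a minimal `diffusers` sketch for local testing (the prompt token is an assumption based on the concept name):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "antoninobrillante/gtl-elephant-test2", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of gtl-elephant-test2").images[0]
image.save("gtl-elephant.png")
```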
Sample pictures of this concept:
|
maidh/poca-SoccerTwos | maidh | 2023-06-27T10:53:06Z | 17 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-06-27T10:52:53Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: maidh/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
TurkuNLP/bloom-finnish-176b | TurkuNLP | 2023-06-27T10:52:36Z | 44 | 6 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"arxiv:2303.03915",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-12T11:52:56Z | ---
license: bigscience-bloom-rail-1.0
---
A multilingual generative pretrained transformer with 176B parameters and capacity for Finnish.
This model is built upon pretrained [BLOOM](https://huggingface.co/bigscience/bloom) which is then further pretrained with a combined ROOTS + Finnish (without weighting) dataset for 40B tokens.
**Datasets**
We used a combination of multiple Finnish resources.
* Finnish Internet Parsebank https://turkunlp.org/finnish_nlp.html
* mC4 multilingual colossal, cleaned Common Crawl https://huggingface.co/datasets/mc4
* Common Crawl Finnish https://TODO
* Finnish Wikipedia https://fi.wikipedia.org/wiki
* Lönnrot Projekti Lönnrot http://www.lonnrot.net/
* ePub National library ”epub” collection
* National library ”lehdet” collection
* Suomi24 The Suomi 24 Corpus 2001-2020 http://urn.fi/urn:nbn:fi:lb-2021101527
* Reddit r/Suomi submissions and comments https://www.reddit.com/r/Suomi
* STT Finnish News Agency Archive 1992-2018 http://urn.fi/urn:nbn:fi:lb-2019041501
* Yle Finnish News Archive 2011-2018 http://urn.fi/urn:nbn:fi:lb-2017070501
* Yle Finnish News Archive 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050401
* Yle News Archive Easy-to-read Finnish 2011-2018 http://urn.fi/urn:nbn:fi:lb-2019050901
* Yle News Archive Easy-to-read Finnish 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050701
* [ROOTS](https://arxiv.org/abs/2303.03915) - original BLOOM training corpus
**Sampling ratios for Finnish**
|Dataset | Chars | Ratio | Weight | W.Ratio |
|----------|--------|---------|--------|---------|
|Parsebank | 35.0B | 16.9\% | 1.5 | 22.7\%|
|mC4-Fi | 46.3B | 22.4\% | 1.0 | 20.0\%|
|CC-Fi | 79.6B | 38.5\% | 1.0 | 34.4\%|
|Fiwiki | 0.8B | 0.4\% | 3.0 | 1.0\%|
|Lönnrot | 0.8B | 0.4\% | 3.0 | 1.0\%|
|Yle | 1.6B | 0.8\% | 2.0 | 1.4\%|
|STT | 2.2B | 1.1\% | 2.0 | 1.9\%|
|ePub | 13.5B | 6.5\% | 1.0 | 5.8\%|
|Lehdet | 5.8B | 2.8\% | 1.0 | 2.5\%|
|Suomi24 | 20.6B | 9.9\% | 1.0 | 8.9\%|
|Reddit-Fi | 0.7B | 0.4\% | 1.0 | 0.3\%|
|**TOTAL** | **207.0B** | **100.0\%** | **N/A** | **100.0\%** |
For the continued pretraining as a whole, ROOTS is mixed in.
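A minimal generation sketch with `transformers` (loading a 176B-parameter checkpoint requires a multi-GPU node; the device map and dtype below are illustrative assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/bloom-finnish-176b")
model = AutoModelForCausalLM.from_pretrained(
    "TurkuNLP/bloom-finnish-176b", device_map="auto", torch_dtype=torch.bfloat16
)
inputs = tokenizer("Suomen pääkaupunki on", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```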
|
ahishamm/vit-base-PH2-sharpened-patch-32 | ahishamm | 2023-06-27T10:48:02Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-27T10:46:42Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-PH2-sharpened-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-PH2-sharpened-patch-32
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the ahishamm/PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0426
- Accuracy: 1.0
- Recall: 1.0
- F1: 1.0
- Precision: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
michaelfeil/ct2fast-mpt-7b-instruct | michaelfeil | 2023-06-27T10:34:39Z | 6 | 0 | transformers | [
"transformers",
"mpt",
"text-generation",
"ctranslate2",
"int8",
"float16",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-05-30T07:19:35Z | ---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- ctranslate2
- int8
- float16
- Composer
- MosaicML
- llm-foundry
inference: false
---
# # Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct)
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-mpt-7b-instruct"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-06-27 using
```
ct2-transformers-converter --model mosaicml/mpt-7b-instruct --output_dir ~/tmp-ct2fast-mpt-7b-instruct --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json requirements.txt .gitattributes --quantization int8_float16 --trust_remote_code
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be idential to original huggingface repo.
# Original description
# MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted in the dolly-15k format:
```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
Intel/deberta-v3-base-mrpc-int8-dynamic-inc | Intel | 2023-06-27T10:32:10Z | 6 | 0 | transformers | [
"transformers",
"onnx",
"deberta-v2",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"neural-compressor",
"PostTrainingDynamic",
"dataset:glue",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-25T07:39:02Z | ---
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- neural-compressor
- PostTrainingDynamic
- onnx
datasets:
- glue
metrics:
- f1
---
# INT8 deberta-v3-base-mrpc
## Post-training dynamic quantization
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Intel/deberta-v3-base-mrpc](https://huggingface.co/Intel/deberta-v3-base-mrpc).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9239|0.9223|
| **Model size (MB)** |350|705|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/deberta-v3-base-mrpc-int8-dynamic')
``` |
Pavlovvyache/video | Pavlovvyache | 2023-06-27T10:28:30Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-06-27T10:26:28Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Intel/xlm-roberta-base-mrpc-int8-dynamic-inc | Intel | 2023-06-27T10:01:34Z | 5 | 0 | transformers | [
"transformers",
"onnx",
"xlm-roberta",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingDynamic",
"en",
"dataset:mrpc",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-28T07:33:55Z | ---
language: en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingDynamic
- onnx
datasets:
- mrpc
metrics:
- f1
---
# INT8 xlm-roberta base finetuned MRPC
## Post-training dynamic quantization
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Intel/xlm-roberta-base-mrpc](https://huggingface.co/Intel/xlm-roberta-base-mrpc).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8966|0.9010|
| **Model size (MB)** |354|1061|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/xlm-roberta-base-mrpc-int8-dynamic')
```
|
michaelfeil/ct2fast-open-llama-13b-open-instruct | michaelfeil | 2023-06-27T09:55:36Z | 6 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"ctranslate2",
"int8",
"float16",
"en",
"dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-27T08:56:08Z | ---
tags:
- ctranslate2
- int8
- float16
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# # Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [VMware/open-llama-13b-open-instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct)
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-open-llama-13b-open-instruct"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-06-27 using
```
ct2-transformers-converter --model VMware/open-llama-13b-open-instruct --output_dir ~/tmp-ct2fast-open-llama-13b-open-instruct --force --copy_files README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be idential to original huggingface repo.
# Original description
# VMware/open-llama-13B-open-instruct
Instruction-tuned version of the fully trained Open LLama 13B model. The model is open for <b>COMMERCIAL USE</b>. <br>
<b> NOTE </b> : The model was trained using the Alpaca prompt template \
<b> NOTE </b> : The fast tokenizer results in incorrect encoding; set the ```use_fast = False``` parameter when instantiating the tokenizer\
<b> NOTE </b> : The model might struggle with code as the tokenizer merges multiple spaces
## License
- <b>Commercially Viable </b>
- Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0
- Language Model, ([openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)) is under apache-2.0
## Nomenclature
- Model : Open-llama
- Model Size: 13B parameters
- Dataset: Open-instruct-v1 (oasst,dolly, hhrlhf)
## Use in Transformers
```
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'VMware/open-llama-13b-open-instruct'
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'
inputt = prompt_template.format(instruction= prompt)
input_ids = tokenizer(inputt, return_tensors="pt").input_ids.to("cuda")
output1 = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output1 = output1[:, input_length:]
output = tokenizer.decode(output1[0])
print(output)
```
## Finetuning details
The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning)
## Evaluation
<B>TODO</B> |
CAiRE/SER-wav2vec2-large-xlsr-53-eng-zho-all-age | CAiRE | 2023-06-27T09:52:35Z | 551 | 4 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"speech-emotion-recognition",
"audio-classification",
"en",
"zh",
"dataset:Ar4ikov/iemocap_audio_text_splitted",
"arxiv:2306.14517",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-06-27T09:09:18Z | ---
license: cc-by-sa-4.0
datasets:
- Ar4ikov/iemocap_audio_text_splitted
language:
- en
- zh
metrics:
- f1
library_name: transformers
pipeline_tag: audio-classification
tags:
- speech-emotion-recognition
---
# Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on English and Chinese data from all-age speakers.
The model is trained on the training sets of [CREMA-D](https://github.com/CheyneyComputerScience/CREMA-D), [CSED](https://github.com/AkishinoShiame/Chinese-Speech-Emotion-Datasets), [ElderReact](https://github.com/Mayer123/ElderReact), [ESD](https://github.com/HLTSingapore/Emotional-Speech-Data), [IEMOCAP](https://sail.usc.edu/iemocap/iemocap_release.htm), and [TESS](https://www.kaggle.com/datasets/ejlok1/toronto-emotional-speech-set-tess).
When using this model, make sure that your speech input is sampled at 16kHz.
The scripts used for training and evaluation can be found here:
[https://github.com/HLTCHKUST/elderly_ser/tree/main](https://github.com/HLTCHKUST/elderly_ser/tree/main)
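For quick experimentation, a minimal sketch with the audio-classification pipeline (the audio path is a placeholder; the returned label names depend on the checkpoint's config):
```python
from transformers import pipeline
classifier = pipeline(
    "audio-classification",
    model="CAiRE/SER-wav2vec2-large-xlsr-53-eng-zho-all-age",
)
# 16 kHz speech clip (placeholder path)
print(classifier("speech_sample.wav"))
```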
## Evaluation Results
For the details (e.g., the statistics of `train`, `valid`, and `test` data), please refer to our paper on [arXiv](https://arxiv.org/abs/2306.14517).
It also provides the model's speech emotion recognition performances on: English-All, Chinese-All, English-Elderly, Chinese-Elderly, English-Adults, Chinese-Adults.
## Citation
Our paper will be published at INTERSPEECH 2023. In the meantime, you can find our paper on [arXiv](https://arxiv.org/abs/2306.14517).
If you find our work useful, please consider citing our paper as follows:
```
@misc{cahyawijaya2023crosslingual,
title={Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition},
author={Samuel Cahyawijaya and Holy Lovenia and Willy Chung and Rita Frieske and Zihan Liu and Pascale Fung},
year={2023},
eprint={2306.14517},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
michaelfeil/ct2fast-starcoder | michaelfeil | 2023-06-27T09:50:37Z | 22 | 13 | transformers | [
"transformers",
"gpt_bigcode",
"text-generation",
"ctranslate2",
"int8",
"float16",
"code",
"dataset:bigcode/the-stack-dedup",
"arxiv:1911.02150",
"arxiv:2205.14135",
"arxiv:2207.14255",
"arxiv:2305.06161",
"license:bigcode-openrail-m",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-23T00:18:05Z | ---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
metrics:
- code_eval
library_name: transformers
tags:
- ctranslate2
- int8
- float16
- code
model-index:
- name: StarCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval (Prompted)
metrics:
- name: pass@1
type: pass@1
value: 0.408
verified: false
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.336
verified: false
- task:
type: text-generation
dataset:
type: mbpp
name: MBPP
metrics:
- name: pass@1
type: pass@1
value: 0.527
verified: false
- task:
type: text-generation
dataset:
type: ds1000
name: DS-1000 (Overall Completion)
metrics:
- name: pass@1
type: pass@1
value: 0.26
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (C++)
metrics:
- name: pass@1
type: pass@1
value: 0.3155
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (C#)
metrics:
- name: pass@1
type: pass@1
value: 0.2101
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (D)
metrics:
- name: pass@1
type: pass@1
value: 0.1357
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Go)
metrics:
- name: pass@1
type: pass@1
value: 0.1761
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Java)
metrics:
- name: pass@1
type: pass@1
value: 0.3022
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Julia)
metrics:
- name: pass@1
type: pass@1
value: 0.2302
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 0.3079
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Lua)
metrics:
- name: pass@1
type: pass@1
value: 0.2389
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (PHP)
metrics:
- name: pass@1
type: pass@1
value: 0.2608
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Perl)
metrics:
- name: pass@1
type: pass@1
value: 0.1734
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Python)
metrics:
- name: pass@1
type: pass@1
value: 0.3357
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (R)
metrics:
- name: pass@1
type: pass@1
value: 0.155
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Ruby)
metrics:
- name: pass@1
type: pass@1
value: 0.0124
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Racket)
metrics:
- name: pass@1
type: pass@1
value: 0.0007
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Rust)
metrics:
- name: pass@1
type: pass@1
value: 0.2184
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Scala)
metrics:
- name: pass@1
type: pass@1
value: 0.2761
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Bash)
metrics:
- name: pass@1
type: pass@1
value: 0.1046
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Swift)
metrics:
- name: pass@1
type: pass@1
value: 0.2274
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (TypeScript)
metrics:
- name: pass@1
type: pass@1
value: 0.3229
verified: false
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
# Fast Inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
This is a quantized version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder).
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-starcoder"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible with [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
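For CPU-only inference, the same wrapper accepts `device="cpu"` with `compute_type="int8"`; a minimal sketch mirroring the GPU example above:
```python
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

# Load the quantized checkpoint with int8 weights on CPU.
model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-starcoder",
    device="cpu",
    compute_type="int8",
)
outputs = model.generate(
    text=["def fibonacci("],
    max_length=64,
    include_prompt_in_result=False,
)
print(outputs)
```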
Converted on 2023-06-27 using
```
ct2-transformers-converter --model bigcode/starcoder --output_dir ~/tmp-ct2fast-starcoder --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
# StarCoder

Play with the model on the [StarCoder Playground](https://huggingface.co/spaces/bigcode/bigcode-playground).
## Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
The StarCoder models are 15.5B parameter models trained on 80+ programming languages from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack), with opt-out requests excluded. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), [a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1 trillion tokens.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [💫StarCoder: May the source be with you!](https://arxiv.org/abs/2305.06161)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Languages:** 80+ Programming languages
## Use
### Intended use
The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, by using the [Tech Assistant prompt](https://huggingface.co/datasets/bigcode/ta-prompt) you can turn it into a capable technical assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoder"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
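The decoded sequence echoes the prompt, so the generated infill is whatever follows the `<fim_middle>` token. Continuing from the snippet above, a small helper sketch (assuming the decoded string keeps the special tokens):
```python
def extract_fim_middle(decoded: str) -> str:
    # Everything after <fim_middle> is the generated infill; trim at the
    # end-of-text marker if the model emits one.
    middle = decoded.split("<fim_middle>", 1)[-1]
    return middle.split("<|endoftext|>", 1)[0]

print(extract_fim_middle(tokenizer.decode(outputs[0])))
```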
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English, although other languages are also present. As such, the model can generate code snippets given some context, but the generated code is not guaranteed to work as intended. It can be inefficient and may contain bugs or exploits. See [the paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) for an in-depth discussion of the model limitations.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 250k
- **Pretraining tokens:** 1 trillion
- **Precision:** bfloat16
## Hardware
- **GPUs:** 512 Tesla A100
- **Training time:** 24 days
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
# Citation
```
@article{li2023starcoder,
title={StarCoder: may the source be with you!},
author={Raymond Li and Loubna Ben Allal and Yangtian Zi and Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and Thomas Wang and Olivier Dehaene and Mishig Davaadorj and Joel Lamy-Poirier and João Monteiro and Oleh Shliazhko and Nicolas Gontier and Nicholas Meade and Armel Zebaze and Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and Swayam Singh and Sasha Luccioni and Paulo Villegas and Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and Nadav Timor and Jennifer Ding and Claire Schlesinger and Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and Carlos Muñoz Ferrandis and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2023},
eprint={2305.06161},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
hongrui/mammogram_v_2_2 | hongrui | 2023-06-27T09:48:52Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-26T22:46:35Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/mammogram_v_2_2
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images below.




|
SHENMU007/neunit_BASE_V10.8 | SHENMU007 | 2023-06-27T09:44:23Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-06-27T06:43:45Z | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
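A minimal text-to-speech sketch using the standard SpeechT5 pipeline; the input sentence is illustrative, and the zero speaker embedding is a placeholder that should be replaced with a real 512-dimensional x-vector:
```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("SHENMU007/neunit_BASE_V10.8")
model = SpeechT5ForTextToSpeech.from_pretrained("SHENMU007/neunit_BASE_V10.8")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好,欢迎使用语音合成。", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```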
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
arildgrimstveit/vicuna7b | arildgrimstveit | 2023-06-27T09:43:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-27T08:36:21Z | ---
inference: false
---
**NOTE: New version available**
Please check out a newer version of the weights [here](https://huggingface.co/lmsys/vicuna-7b-v1.3).
If you still want to use this old version, please see the compatibility notes and the differences between versions [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
**NOTE: This "delta model" cannot be used directly.**
Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights. See [instructions](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#how-to-apply-delta-weights-for-weights-v11-and-v0).
<br>
<br>
# Vicuna Model Card
## Model details
**Model type:**
Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
Vicuna was trained between March 2023 and April 2023.
**Organizations developing the model:**
The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.
**Paper or resources for more information:**
https://lmsys.org/blog/2023-03-30-vicuna/
**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:**
The primary use of Vicuna is research on large language models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs.
See https://lmsys.org/blog/2023-03-30-vicuna/ for more details.
|
apparaomulpuri/alpaca-HJ-model | apparaomulpuri | 2023-06-27T09:39:37Z | 6 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-27T05:12:19Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
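A loading sketch consistent with the 8-bit quantization config above; the base model is resolved from the adapter config, so no base id is hard-coded:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "apparaomulpuri/alpaca-HJ-model"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model in 8-bit, then attach the adapter weights.
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```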
|
hw2942/bert-base-chinese-finetuning-wallstreetcn-morning-news-vix-sz50-v1 | hw2942 | 2023-06-27T09:03:45Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-27T02:57:50Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-chinese-finetuning-wallstreetcn-morning-news-vix-sz50-v1
results: []
language:
- zh
widget:
- text: A股创业板六年新高;纳指跌落高位,标普又新高,创史上第二大中概IPO和今年美股最大IPO的滴滴首日冲高回落,市值破800亿美元,叮咚买菜次日涨逾60%;美元逾两月新高,金银铜6月大跌,原油半年涨超50%。\n中国6月官方制造业PMI为50.9,价格指数从高位回落。\n央行等六部门:充分发挥信贷等金融子市场合力,增强政策的针对性和可操作性。\n人社部 “十四五” 发展规划要求,基本养老保险参保率达95%,城镇新增就业逾5000万人。\n沪深交所7月19日起下调基金交易经手费收费标准。\n奈雪的茶赴港上市首日破发,收盘大跌14%,市值跌破300亿港元。\n港股上市倒计时,小鹏汽车定价165港元/股。\n格力2020股东会通过员工持股计划等议案,董明珠称接班人不是我说你行就行,是你能行才行。\n美国6月小非农ADP新增就业高于预期,绝对值较5月有所回落。\n美联储逆回购用量史上首次逼近1万亿美元。\n媒体称拜登最早下周颁布新行政令,限制多个行业的寡头垄断。\n亚马逊称FTC新任主席有偏见,寻求其回避反垄断调查。\n散户最爱平台Robinhood遭FINRA创纪录罚款7000万美元,被指坑害百万客户。
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuning-wallstreetcn-morning-news-vix-sz50-v1
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0050
- Accuracy: 0.6538
## Model description
More information needed
## Intended uses & limitations
More information needed
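A minimal classification sketch via the Hugging Face pipeline; the input sentence is taken from the widget example above, and the label meanings are not documented here, so the output is the raw label/score pair:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hw2942/bert-base-chinese-finetuning-wallstreetcn-morning-news-vix-sz50-v1",
)
print(classifier("A股创业板六年新高;纳指跌落高位,标普又新高。"))
```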
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 19 | 0.6986 | 0.5 |
| No log | 2.0 | 38 | 0.6988 | 0.5 |
| No log | 3.0 | 57 | 0.7804 | 0.5 |
| No log | 4.0 | 76 | 0.6912 | 0.5 |
| No log | 5.0 | 95 | 0.8595 | 0.5192 |
| No log | 6.0 | 114 | 0.7574 | 0.5962 |
| No log | 7.0 | 133 | 1.6235 | 0.6154 |
| No log | 8.0 | 152 | 1.2308 | 0.6346 |
| No log | 9.0 | 171 | 1.1341 | 0.6923 |
| No log | 10.0 | 190 | 1.0050 | 0.6538 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3 |
Rryay12/ppo-SnowballTarget | Rryay12 | 2023-06-27T08:58:22Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-06-27T08:48:38Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Rryay12/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
SHENMU007/neunit-changchun-20230626V2 | SHENMU007 | 2023-06-27T08:57:33Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-06-27T05:55:50Z | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: neunit-changchun-20230626V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# neunit-changchun-20230626V2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
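A minimal audio-classification sketch; the audio file name is illustrative, and the label set is whatever the fine-tuned head defines:
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="SHENMU007/neunit-changchun-20230626V2",
)
print(classifier("example.wav", top_k=3))
```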
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0047 | 1.0 | 3303 | 0.0019 | 0.9997 |
| 0.0029 | 2.0 | 6606 | 0.0010 | 0.9996 |
| 0.0044 | 3.0 | 9909 | 0.0003 | 0.9999 |
| 0.0006 | 4.0 | 13212 | 0.0000 | 1.0 |
| 0.0 | 5.0 | 16515 | 0.0001 | 1.0000 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
berluk/cow-detection | berluk | 2023-06-27T08:51:37Z | 4 | 0 | tf-keras | [
"tf-keras",
"image-classification",
"region:us"
] | image-classification | 2023-06-16T13:05:21Z | ---
pipeline_tag: image-classification
--- |
Broonion/RLcourse-unit1-ppo-LunarLander-v2 | Broonion | 2023-06-27T08:46:01Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-27T08:45:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 295.76 +/- 14.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the standard huggingface_sb3 naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust it to the actual file in this repository.
checkpoint = load_from_hub("Broonion/RLcourse-unit1-ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Intel/albert-base-v2-sst2-int8-dynamic-inc | Intel | 2023-06-27T08:45:08Z | 5 | 0 | transformers | [
"transformers",
"onnx",
"albert",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingDynamic",
"en",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-28T09:10:38Z | ---
language: en
license: apache-2.0
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingDynamic
- onnx
datasets:
- glue
metrics:
- f1
---
# INT8 albert-base-v2-sst2
## Post-training dynamic quantization
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Alireza1044/albert-base-v2-sst2](https://huggingface.co/Alireza1044/albert-base-v2-sst2).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-accuracy)** |0.9186|0.9232|
| **Model size (MB)** |59|45|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/albert-base-v2-sst2-int8-dynamic')
```
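For end-to-end inference, the quantized model can be paired with the tokenizer of the original fp32 checkpoint in a standard pipeline (a sketch; tokenizer compatibility is assumed):
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model = ORTModelForSequenceClassification.from_pretrained('Intel/albert-base-v2-sst2-int8-dynamic')
tokenizer = AutoTokenizer.from_pretrained('Alireza1044/albert-base-v2-sst2')
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("This movie was surprisingly good!"))
```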
|
joohwan/chanhyuk-gd | joohwan | 2023-06-27T08:40:39Z | 78 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-27T07:12:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: chanhyuk-gd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chanhyuk-gd
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0837
- Wer: 9.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
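A minimal transcription sketch via the ASR pipeline; the audio file name is illustrative:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="joohwan/chanhyuk-gd")
print(asr("example.wav")["text"])
```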
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.246 | 0.18 | 500 | 0.2557 | 24.6951 |
| 0.1363 | 0.36 | 1000 | 0.1898 | 18.1750 |
| 0.094 | 0.54 | 1500 | 0.1450 | 14.4255 |
| 0.0842 | 0.72 | 2000 | 0.1100 | 15.4495 |
| 0.0595 | 0.9 | 2500 | 0.0916 | 10.6008 |
| 0.0141 | 1.08 | 3000 | 0.0837 | 9.9533 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|