| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC]) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 501 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, UTC]) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
jordyvl/dit-tiny_tobacco3482_kd_CEKD_t5.0_a0.5 | jordyvl | 2023-07-09T11:08:03Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T10:52:37Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_tobacco3482_kd_CEKD_t5.0_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_tobacco3482_kd_CEKD_t5.0_a0.5
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set (a sketch of the Brier-loss metric follows the list):
- Loss: 3.8497
- Accuracy: 0.18
- Brier Loss: 0.8788
- Nll: 6.0432
- F1 Micro: 0.18
- F1 Macro: 0.0305
- Ece: 0.2578
- Aurc: 0.8511
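Brier loss, NLL, ECE and AURC above are calibration-oriented metrics. As a reference point, here is a minimal sketch of the conventional multi-class Brier loss (mean squared error between predicted probabilities and one-hot labels); the exact evaluation script behind this card is not shown, so treat the snippet as an illustration rather than the author's code.
```python
# Hedged sketch: conventional multi-class Brier loss (illustrative only).
import numpy as np

def brier_loss(probs: np.ndarray, labels: np.ndarray) -> float:
    """probs: (n_samples, n_classes) predicted probabilities; labels: (n_samples,) int class ids."""
    onehot = np.eye(probs.shape[1])[labels]
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

print(brier_loss(np.array([[0.7, 0.2, 0.1]]), np.array([0])))  # 0.14
```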
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
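These values map directly onto the 🤗 `TrainingArguments` API; a minimal sketch is below. The actual training script is not included in this card, so the snippet illustrates the configuration rather than reproducing the exact code used.
```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dit-tiny_tobacco3482_kd_CEKD_t5.0_a0.5",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=16,  # effective batch size 16 * 16 = 256, as listed
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=25,
)
```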
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 4.0678 | 0.145 | 0.8999 | 10.1608 | 0.145 | 0.0253 | 0.2221 | 0.8466 |
| No log | 1.96 | 6 | 4.0316 | 0.145 | 0.8948 | 10.5160 | 0.145 | 0.0253 | 0.2239 | 0.8468 |
| No log | 2.96 | 9 | 3.9774 | 0.16 | 0.8871 | 8.6333 | 0.16 | 0.0524 | 0.2217 | 0.8424 |
| No log | 3.96 | 12 | 3.9325 | 0.155 | 0.8813 | 6.5340 | 0.155 | 0.0272 | 0.2161 | 0.8837 |
| No log | 4.96 | 15 | 3.9041 | 0.155 | 0.8787 | 7.1704 | 0.155 | 0.0271 | 0.2296 | 0.8923 |
| No log | 5.96 | 18 | 3.8876 | 0.155 | 0.8782 | 8.7334 | 0.155 | 0.0277 | 0.2325 | 0.8942 |
| No log | 6.96 | 21 | 3.8766 | 0.18 | 0.8785 | 8.8120 | 0.18 | 0.0314 | 0.2476 | 0.8555 |
| No log | 7.96 | 24 | 3.8690 | 0.18 | 0.8791 | 8.8676 | 0.18 | 0.0308 | 0.2643 | 0.8534 |
| No log | 8.96 | 27 | 3.8633 | 0.18 | 0.8793 | 8.5299 | 0.18 | 0.0306 | 0.2594 | 0.8541 |
| No log | 9.96 | 30 | 3.8601 | 0.18 | 0.8796 | 7.4142 | 0.18 | 0.0305 | 0.2622 | 0.8548 |
| No log | 10.96 | 33 | 3.8577 | 0.18 | 0.8797 | 6.6642 | 0.18 | 0.0305 | 0.2720 | 0.8546 |
| No log | 11.96 | 36 | 3.8560 | 0.18 | 0.8797 | 6.2862 | 0.18 | 0.0305 | 0.2723 | 0.8543 |
| No log | 12.96 | 39 | 3.8547 | 0.18 | 0.8796 | 6.2084 | 0.18 | 0.0305 | 0.2678 | 0.8541 |
| No log | 13.96 | 42 | 3.8535 | 0.18 | 0.8794 | 6.1826 | 0.18 | 0.0305 | 0.2631 | 0.8534 |
| No log | 14.96 | 45 | 3.8525 | 0.18 | 0.8793 | 6.1744 | 0.18 | 0.0305 | 0.2593 | 0.8529 |
| No log | 15.96 | 48 | 3.8516 | 0.18 | 0.8792 | 6.1606 | 0.18 | 0.0305 | 0.2680 | 0.8527 |
| No log | 16.96 | 51 | 3.8511 | 0.18 | 0.8791 | 6.1634 | 0.18 | 0.0305 | 0.2724 | 0.8528 |
| No log | 17.96 | 54 | 3.8510 | 0.18 | 0.8791 | 6.0971 | 0.18 | 0.0305 | 0.2676 | 0.8525 |
| No log | 18.96 | 57 | 3.8508 | 0.18 | 0.8790 | 6.0686 | 0.18 | 0.0305 | 0.2630 | 0.8522 |
| No log | 19.96 | 60 | 3.8503 | 0.18 | 0.8789 | 6.0495 | 0.18 | 0.0305 | 0.2581 | 0.8518 |
| No log | 20.96 | 63 | 3.8501 | 0.18 | 0.8789 | 6.0918 | 0.18 | 0.0305 | 0.2581 | 0.8516 |
| No log | 21.96 | 66 | 3.8499 | 0.18 | 0.8788 | 6.0464 | 0.18 | 0.0305 | 0.2536 | 0.8516 |
| No log | 22.96 | 69 | 3.8497 | 0.18 | 0.8788 | 6.0419 | 0.18 | 0.0305 | 0.2535 | 0.8513 |
| No log | 23.96 | 72 | 3.8497 | 0.18 | 0.8788 | 6.0432 | 0.18 | 0.0305 | 0.2578 | 0.8511 |
| No log | 24.96 | 75 | 3.8497 | 0.18 | 0.8788 | 6.0432 | 0.18 | 0.0305 | 0.2578 | 0.8511 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
dlowl/dolly-v2-12b-endpoint | dlowl | 2023-07-09T10:52:57Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-09T10:42:09Z | ---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
duplicated_from: databricks/dolly-v2-12b
---
# dolly-v2-12b Model Card
## Summary
Databricks' `dolly-v2-12b` is an instruction-following large language model trained on the Databricks machine learning platform
that is licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine-tuning records
[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
information extraction, open QA and summarization. `dolly-v2-12b` is not a state-of-the-art model, but does exhibit surprisingly
high quality instruction following behavior not characteristic of the foundation model on which it is based.
Dolly v2 is also available in these smaller model sizes:
* [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter model based on `pythia-6.9b`
* [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b), a 2.8 billion parameter model based on `pythia-2.8b`
Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on
running inference for various GPU configurations.
**Owner**: Databricks, Inc.
## Model Overview
`dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
[EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
In a Databricks notebook you could run:
```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
It is also fine to remove it if there is sufficient memory.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:
```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b", device_map="auto", torch_dtype=torch.bfloat16)
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
### LangChain Usage
To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned
and the default for the pipeline is to only return the new text.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16,
trust_remote_code=True, device_map="auto", return_full_text=True)
```
You can create a prompt that either has only an instruction or has an instruction with context:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline
# template for an instruction with no input
prompt = PromptTemplate(
input_variables=["instruction"],
template="{instruction}")
# template for an instruction with input
prompt_with_context = PromptTemplate(
input_variables=["instruction", "context"],
template="{instruction}\n\nInput:\n{context}")
hf_pipeline = HuggingFacePipeline(pipeline=generate_text)
llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```
Example predicting using a simple instruction:
```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```
Example predicting using an instruction with context:
```python
context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""
print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```
## Known Limitations
### Performance Limitations
**`dolly-v2-12b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
competitively with more modern model architectures or models subject to larger pretraining corpuses.
The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-12b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as well-formatted letter writing, present in the original model.
### Dataset Limitations
Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpuses.
- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
associations.
- **`databricks-dolly-15k`**: The training data on which `dolly-v2-12b` is instruction tuned represents natural language instructions generated
by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages
for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or
personally identifying information about non-public figures, but it may contain typos and factual errors.
The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects
the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.
Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that
maximize the potential of all individuals and organizations.
### Benchmark Metrics
Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
model results are sorted by geometric mean to produce an intelligible ordering (a short sketch of this computation follows the table). As outlined above, these results demonstrate that `dolly-v2-12b` is not state of the art,
and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets,
but a robust statement as to the sources of these variations requires further study.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean |
| --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------|
| EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 |
| databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 |
| databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 |
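As a concrete illustration of the ordering above, the `gmean` column can be reproduced with a plain unweighted geometric mean over the seven per-task scores; the short sketch below assumes that definition.
```python
# Hedged sketch: unweighted geometric mean over per-task scores,
# shown for the databricks/dolly-v2-12b row of the table above.
from math import prod

def geometric_mean(scores):
    return prod(scores) ** (1 / len(scores))

dolly_v2_12b = [0.408, 0.63931, 0.616417, 0.707927, 0.388225, 0.757889, 0.568196]
print(geometric_mean(dolly_v2_12b))  # ~0.5678, matching the table's 0.56781
```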
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# Happy Hacking! |
hegelty/KcBERT-Large-finetuned-josa | hegelty | 2023-07-09T10:43:46Z | 70 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T16:53:29Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: hegelty/KcBERT-Large-finetuned-josa
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hegelty/KcBERT-Large-finetuned-josa
This model is a fine-tuned version of [beomi/KcBERT-Large](https://huggingface.co/beomi/KcBERT-Large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0058
- Validation Loss: 0.0000
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent Keras optimizer setup follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 59393, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
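The serialized optimizer above corresponds to the AdamWeightDecay-plus-warmup schedule that the `transformers.create_optimizer` Keras helper builds; a minimal sketch with the values copied from the config is below. Whether this exact helper was used is an assumption, since the training script is not part of the card.
```python
# Hedged sketch: rebuilding the described optimizer/schedule with the Keras helper.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,           # initial_learning_rate in the config above
    num_train_steps=59393,  # decay_steps
    num_warmup_steps=1000,  # warmup_steps
    weight_decay_rate=0.01,
)
```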
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0058 | 0.0000 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.9.2
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-real | hafidikhsan | 2023-07-09T10:38:14Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-09T10:37:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-real
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-real
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):
- Loss: 1.0733
- Accuracy: 0.684
- F1: 0.6768
- Precision: 0.6727
- Recall: 0.684
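A minimal sketch of how Accuracy/F1/Precision/Recall are typically produced in a `Trainer` `compute_metrics` callback is below; the averaging mode is an assumption, since the card does not state which one was used.
```python
# Hedged sketch: a compute_metrics callback for the four reported metrics
# (weighted averaging assumed; not necessarily the card's own script).
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {"accuracy": accuracy_score(labels, preds),
            "f1": f1, "precision": precision, "recall": recall}
```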
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.939 | 1.0 | 313 | 0.9081 | 0.6268 | 0.5698 | 0.6363 | 0.6268 |
| 0.83 | 2.0 | 626 | 0.7514 | 0.664 | 0.6410 | 0.6418 | 0.664 |
| 0.6184 | 3.0 | 939 | 0.8578 | 0.6484 | 0.6502 | 0.6529 | 0.6484 |
| 0.1805 | 4.0 | 1252 | 1.0733 | 0.684 | 0.6768 | 0.6727 | 0.684 |
| 0.3776 | 5.0 | 1565 | 1.3549 | 0.6672 | 0.6646 | 0.6630 | 0.6672 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
TheBloke/orca_mini_v2_13b-GGML | TheBloke | 2023-07-09T10:28:34Z | 0 | 24 | transformers | [
"transformers",
"text-generation",
"en",
"dataset:psmathur/orca_minis_uncensored_dataset",
"arxiv:2306.02707",
"arxiv:2302.13971",
"arxiv:2304.12244",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-generation | 2023-07-09T10:07:58Z | ---
inference: false
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- psmathur/orca_minis_uncensored_dataset
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Pankaj Mathur's Orca Mini v2 13B GGML
These files are GGML format model files for [Pankaj Mathur's Orca Mini v2 13B](https://huggingface.co/psmathur/orca_mini_v2_13b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/orca_mini_v2_13b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_v2_13b-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_v2_13b)
## Prompt template: orca_mini
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Input:
input, if required
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| orca_mini_v2_13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB| 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| orca_mini_v2_13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB| 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| orca_mini_v2_13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB| 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| orca_mini_v2_13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB| 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| orca_mini_v2_13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB| 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| orca_mini_v2_13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB| 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| orca_mini_v2_13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
| orca_mini_v2_13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| orca_mini_v2_13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB| 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| orca_mini_v2_13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB| 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| orca_mini_v2_13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| orca_mini_v2_13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| orca_mini_v2_13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB| 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| orca_mini_v2_13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m orca_mini_v2_13b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### User: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
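If you prefer Python, `llama-cpp-python` (listed above) can load the same GGML files. The sketch below mirrors the command line above; the chosen quant file and settings are illustrative, and it assumes a GGML-era (0.1.x) release of the library.
```python
# Hedged sketch: loading a GGML quant with llama-cpp-python (GGML-era releases).
from llama_cpp import Llama

llm = Llama(
    model_path="orca_mini_v2_13b.ggmlv3.q4_K_M.bin",
    n_ctx=2048,       # matches -c 2048 above
    n_gpu_layers=32,  # matches -ngl 32; set to 0 without GPU acceleration
)
output = llm(
    "### User: Write a story about llamas\n### Response:",
    max_tokens=256, temperature=0.7, repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```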
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Pankaj Mathur's Orca Mini v2 13B
# orca_mini_v2_13b
An **Uncensored** LLaMA-13b model built in collaboration with [Eric Hartford](https://huggingface.co/ehartford), trained on explain-tuned datasets created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the Orca Research Paper's dataset construction approaches.
Please note this model has *better code generation capabilities* compared to our original orca_mini_13b, which was trained on the base OpenLLaMA-13b model and which has the [empty-spaces issue and was found not good for code generation](https://github.com/openlm-research/open_llama#update-06072023).
**P.S. I am #opentowork, if you can help, please reach out to me at www.linkedin.com/in/pankajam**
# Evaluation
I evaluated orca_mini_v2_13b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Value**|**Stderr**|
|:------:|:-------------:|:---------:|
|*arc_challenge*|0.5572|0.0145|
|*hellaswag*|0.7964|0.0040|
|*mmlu*|0.4969|0.035|
|*truthfulqa_mc*|0.5231|0.0158|
|*Total Average*|0.5933|0.0114|
# Dataset
We used an uncensored script on top of the previous explain-tuned datasets we built, which are the [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).
We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps the student model (i.e., this model) learn the ***thought*** process from the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).
Please see the example usage below for how the **System** prompt is added before each **instruction**.
# Training
The training configurations are provided in the table below.
Training ran on 4x A100 (80 GB) GPUs and took around 21 hours, at a cost of about $210 (~$10/hour for a Spot Instance), using [Azure Standard_NC96ads_A100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/nc-a100-v4-series#supported-features).
We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), by writing our own fine-tuning scripts plus leveraging some of the model training code provided by the amazing [FastChat](https://github.com/lm-sys/FastChat).
Here are some of the parameters used during training (a DeepSpeed-style configuration sketch follows the table):
|**Parameter**|**Value**|
|:-------------:|:-------------:|
|*batch_size*|48|
|*train_micro_batch_size_per_gpu*|3|
|*gradient_accumulation_steps*|4|
|*Learning rate*|2e-5|
|*Max length*|2048|
|*Epochs*|3|
|*Optimizer*|AdamW|
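A minimal sketch of how these values fit together in a DeepSpeed ZeRO stage-3 configuration is shown below; it is assembled from the table and the ZeRO statement above and is not the authors' actual config file.
```python
# Hedged sketch: DeepSpeed ZeRO stage-3 settings implied by the table above.
# train_batch_size 48 = 3 (micro batch per GPU) x 4 (grad accumulation) x 4 (A100 GPUs).
ds_config = {
    "train_batch_size": 48,
    "train_micro_batch_size_per_gpu": 3,
    "gradient_accumulation_steps": 4,
    "zero_optimization": {"stage": 3},
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},
}
```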
# Example Usage
Here is prompt format for [Oobabooga Text generation UI ](https://github.com/oobabooga/text-generation-webui)
```
### System:
{system}
### User:
{instruction}
### Input:
{input}
### Response:
```
Here is sample example:
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
Tell me how to break into my own car
### Input:
### Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```
Below shows a code example on how to use this model
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# Hugging Face model_path
model_path = 'psmathur/orca_mini_v2_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
#generate text function
def generate_text(system, instruction, input=None):
if input:
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
else:
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
tokens = tokenizer.encode(prompt)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to('cuda')
instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length+instance['generate_len'],
use_cache=True,
do_sample=True,
top_p=instance['top_p'],
temperature=instance['temperature'],
top_k=instance['top_k']
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f'[!] Response: {string}'
# Sample Test Instruction
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Tell me how to break into my own car'
print(generate_text(system, instruction))
```
**NOTE: The real response is hidden here with ^^^^^^^^^^^^^.**
```
[!] Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```
Next Goals:
1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
2) Provide more options for Text generation UI. (maybe https://github.com/oobabooga/text-generation-webui)
3) Provide 4bit GGML/GPTQ quantized model (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)
Limitations & Biases:
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Disclaimer:
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
Please consult an attorney before using this model for commercial purposes.
Citation:
If you found orca_mini_v2_13b useful in your research or applications, please kindly cite using the following BibTeX:
```
@misc{orca_mini_v2_13b,
author = {Pankaj Mathur},
title = {orca_mini_v2_13b: An explain tuned LLaMA-13b model on uncensored wizardlm, alpaca, & dolly datasets},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v2_13b}},
}
```
```
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
```
@misc{openalpaca,
author = {Yixuan Su and Tian Lan and Deng Cai},
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
```
@misc{xu2023wizardlm,
title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
year={2023},
eprint={2304.12244},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
jordyvl/dit-tiny_rvl_cdip_100_examples_per_class_simkd_CEKD_t1_aNone | jordyvl | 2023-07-09T10:18:24Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T09:28:41Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_rvl_cdip_100_examples_per_class_simkd_CEKD_t1_aNone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_rvl_cdip_100_examples_per_class_simkd_CEKD_t1_aNone
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1502
- Accuracy: 0.0625
- Brier Loss: 0.9374
- Nll: 9.1398
- F1 Micro: 0.0625
- F1 Macro: 0.0074
- Ece: 0.1015
- Aurc: 0.9383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 12 | 0.1540 | 0.0625 | 0.9376 | 8.5438 | 0.0625 | 0.0074 | 0.1043 | 0.9530 |
| No log | 1.96 | 24 | 0.1519 | 0.0625 | 0.9376 | 8.2831 | 0.0625 | 0.0074 | 0.1008 | 0.9465 |
| No log | 2.96 | 36 | 0.1512 | 0.0625 | 0.9375 | 8.4629 | 0.0625 | 0.0074 | 0.1028 | 0.9336 |
| No log | 3.96 | 48 | 0.1510 | 0.0625 | 0.9375 | 8.6283 | 0.0625 | 0.0074 | 0.1027 | 0.9365 |
| No log | 4.96 | 60 | 0.1509 | 0.0625 | 0.9375 | 8.5065 | 0.0625 | 0.0074 | 0.1030 | 0.9433 |
| No log | 5.96 | 72 | 0.1508 | 0.0625 | 0.9375 | 8.4779 | 0.0625 | 0.0074 | 0.1017 | 0.9414 |
| No log | 6.96 | 84 | 0.1507 | 0.0625 | 0.9375 | 8.5053 | 0.0625 | 0.0074 | 0.1045 | 0.9438 |
| No log | 7.96 | 96 | 0.1507 | 0.0625 | 0.9375 | 8.7396 | 0.0625 | 0.0074 | 0.1032 | 0.9440 |
| No log | 8.96 | 108 | 0.1506 | 0.0625 | 0.9375 | 8.6420 | 0.0625 | 0.0074 | 0.1031 | 0.9448 |
| No log | 9.96 | 120 | 0.1506 | 0.0625 | 0.9375 | 8.8410 | 0.0625 | 0.0074 | 0.1045 | 0.9438 |
| No log | 10.96 | 132 | 0.1506 | 0.0625 | 0.9374 | 8.9438 | 0.0625 | 0.0074 | 0.1042 | 0.9413 |
| No log | 11.96 | 144 | 0.1505 | 0.0625 | 0.9374 | 8.9847 | 0.0625 | 0.0074 | 0.1032 | 0.9418 |
| No log | 12.96 | 156 | 0.1505 | 0.0625 | 0.9374 | 9.0594 | 0.0625 | 0.0074 | 0.1031 | 0.9397 |
| No log | 13.96 | 168 | 0.1504 | 0.0625 | 0.9374 | 9.0748 | 0.0625 | 0.0074 | 0.1045 | 0.9343 |
| No log | 14.96 | 180 | 0.1504 | 0.0625 | 0.9374 | 9.0912 | 0.0625 | 0.0074 | 0.1018 | 0.9358 |
| No log | 15.96 | 192 | 0.1504 | 0.0625 | 0.9374 | 9.0950 | 0.0625 | 0.0074 | 0.1032 | 0.9331 |
| No log | 16.96 | 204 | 0.1503 | 0.0625 | 0.9374 | 9.2141 | 0.0625 | 0.0074 | 0.1015 | 0.9363 |
| No log | 17.96 | 216 | 0.1503 | 0.0625 | 0.9374 | 9.0918 | 0.0625 | 0.0074 | 0.1046 | 0.9354 |
| No log | 18.96 | 228 | 0.1503 | 0.0625 | 0.9374 | 9.1430 | 0.0625 | 0.0074 | 0.1018 | 0.9385 |
| No log | 19.96 | 240 | 0.1503 | 0.0625 | 0.9374 | 9.2149 | 0.0625 | 0.0074 | 0.0991 | 0.9404 |
| No log | 20.96 | 252 | 0.1503 | 0.0625 | 0.9374 | 9.0900 | 0.0625 | 0.0074 | 0.1043 | 0.9386 |
| No log | 21.96 | 264 | 0.1503 | 0.0625 | 0.9374 | 9.1244 | 0.0625 | 0.0074 | 0.1060 | 0.9395 |
| No log | 22.96 | 276 | 0.1503 | 0.0625 | 0.9374 | 9.1353 | 0.0625 | 0.0074 | 0.1005 | 0.9378 |
| No log | 23.96 | 288 | 0.1502 | 0.0625 | 0.9374 | 9.2063 | 0.0625 | 0.0074 | 0.1032 | 0.9373 |
| No log | 24.96 | 300 | 0.1502 | 0.0625 | 0.9374 | 9.1398 | 0.0625 | 0.0074 | 0.1015 | 0.9383 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
KJan05/ppo-CartPole-v1-unit8-p1 | KJan05 | 2023-07-09T10:09:08Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T08:36:34Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -80.21 +/- 69.99
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'KJan05/ppo-CartPole-v1-unit8-p1'
'batch_size': 512
'minibatch_size': 128}
```
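The two derived values at the end of the dictionary follow from the rollout settings; a short sketch of the CleanRL-style convention (assumed here, since this is a custom implementation) is:
```python
# Hedged sketch: how batch_size and minibatch_size follow from the rollout settings.
num_envs, num_steps, num_minibatches = 4, 128, 4
batch_size = num_envs * num_steps               # 4 * 128 = 512, as listed
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128, as listed
print(batch_size, minibatch_size)
```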
|
DovahYol/Reinforce-Pixelcopter-PLE-v0 | DovahYol | 2023-07-09T10:04:12Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T10:04:05Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 65.90 +/- 39.44
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
cgcgcgcgcg/111 | cgcgcgcgcg | 2023-07-09T09:32:21Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-07-09T09:31:54Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jvvelzen/dqn-SpaceInvadersNoFrameskip-v4 | jvvelzen | 2023-07-09T09:29:26Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T09:28:53Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 476.00 +/- 136.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jvvelzen -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jvvelzen -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jvvelzen
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jordyvl/dit-small_tobacco3482_simkd_CEKD_t1_aNone | jordyvl | 2023-07-09T09:27:27Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-07T22:15:45Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_tobacco3482_simkd_CEKD_t1_aNone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-small_tobacco3482_simkd_CEKD_t1_aNone
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9876
- Accuracy: 0.085
- Brier Loss: 0.8927
- Nll: 8.3272
- F1 Micro: 0.085
- F1 Macro: 0.0461
- Ece: 0.1645
- Aurc: 0.7988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 12 | 1.0049 | 0.08 | 0.8993 | 5.4663 | 0.08 | 0.0322 | 0.1476 | 0.8883 |
| No log | 1.96 | 24 | 1.0007 | 0.165 | 0.8988 | 5.5926 | 0.165 | 0.0284 | 0.2066 | 0.8251 |
| No log | 2.96 | 36 | 0.9994 | 0.16 | 0.8982 | 5.9135 | 0.16 | 0.0277 | 0.2100 | 0.8518 |
| No log | 3.96 | 48 | 0.9984 | 0.17 | 0.8975 | 6.1195 | 0.17 | 0.0574 | 0.2142 | 0.8153 |
| No log | 4.96 | 60 | 0.9976 | 0.19 | 0.8970 | 6.2724 | 0.19 | 0.0752 | 0.2294 | 0.8254 |
| No log | 5.96 | 72 | 0.9967 | 0.09 | 0.8968 | 6.3787 | 0.09 | 0.0315 | 0.1591 | 0.7950 |
| No log | 6.96 | 84 | 0.9958 | 0.065 | 0.8964 | 6.4218 | 0.065 | 0.0122 | 0.1433 | 0.8333 |
| No log | 7.96 | 96 | 0.9949 | 0.065 | 0.8960 | 6.5170 | 0.065 | 0.0122 | 0.1543 | 0.8344 |
| No log | 8.96 | 108 | 0.9941 | 0.065 | 0.8956 | 6.5572 | 0.065 | 0.0123 | 0.1545 | 0.8331 |
| No log | 9.96 | 120 | 0.9934 | 0.07 | 0.8954 | 6.6362 | 0.07 | 0.0304 | 0.1597 | 0.8313 |
| No log | 10.96 | 132 | 0.9926 | 0.07 | 0.8951 | 6.6430 | 0.07 | 0.0304 | 0.1576 | 0.8325 |
| No log | 11.96 | 144 | 0.9920 | 0.07 | 0.8948 | 6.6842 | 0.07 | 0.0304 | 0.1590 | 0.8225 |
| No log | 12.96 | 156 | 0.9914 | 0.07 | 0.8947 | 6.7731 | 0.07 | 0.0304 | 0.1619 | 0.8155 |
| No log | 13.96 | 168 | 0.9909 | 0.07 | 0.8944 | 6.8584 | 0.07 | 0.0304 | 0.1522 | 0.8128 |
| No log | 14.96 | 180 | 0.9904 | 0.07 | 0.8941 | 6.8161 | 0.07 | 0.0304 | 0.1524 | 0.8142 |
| No log | 15.96 | 192 | 0.9899 | 0.07 | 0.8940 | 7.3169 | 0.07 | 0.0304 | 0.1532 | 0.8109 |
| No log | 16.96 | 204 | 0.9894 | 0.07 | 0.8937 | 7.8481 | 0.07 | 0.0304 | 0.1531 | 0.8132 |
| No log | 17.96 | 216 | 0.9890 | 0.08 | 0.8935 | 8.3375 | 0.08 | 0.0439 | 0.1587 | 0.8002 |
| No log | 18.96 | 228 | 0.9886 | 0.07 | 0.8933 | 8.4250 | 0.07 | 0.0307 | 0.1536 | 0.8132 |
| No log | 19.96 | 240 | 0.9883 | 0.085 | 0.8931 | 8.4316 | 0.085 | 0.0445 | 0.1618 | 0.8014 |
| No log | 20.96 | 252 | 0.9880 | 0.075 | 0.8930 | 8.4395 | 0.075 | 0.0392 | 0.1566 | 0.8088 |
| No log | 21.96 | 264 | 0.9878 | 0.085 | 0.8929 | 8.3319 | 0.085 | 0.0476 | 0.1621 | 0.7956 |
| No log | 22.96 | 276 | 0.9877 | 0.08 | 0.8928 | 8.3274 | 0.08 | 0.0439 | 0.1594 | 0.8024 |
| No log | 23.96 | 288 | 0.9876 | 0.08 | 0.8927 | 8.3285 | 0.08 | 0.0440 | 0.1595 | 0.8014 |
| No log | 24.96 | 300 | 0.9876 | 0.085 | 0.8927 | 8.3272 | 0.085 | 0.0461 | 0.1645 | 0.7988 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
cagarraz/ppo-PyramidsRND | cagarraz | 2023-07-09T09:23:32Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-09T09:15:22Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: cagarraz/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
crisU8/bert-finetuned-ner-clinical-BETO-1-uncased | crisU8 | 2023-07-09T09:19:58Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-09T09:06:55Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-BETO-1-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-BETO-1-uncased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5376
- Precision: 0.7341
- Recall: 0.7772
- F1: 0.7550
- Accuracy: 0.9177
## Model description
More information needed
## Intended uses & limitations
More information needed
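A minimal token-classification sketch is shown below; the clinical sentence is made up, and the entity label names depend on this checkpoint's config.
```python
from transformers import pipeline

# Hypothetical Spanish clinical sentence, made up for illustration.
ner = pipeline(
    "token-classification",
    model="crisU8/bert-finetuned-ner-clinical-BETO-1-uncased",
    aggregation_strategy="simple",
)
print(ner("El paciente presenta dolor abdominal y fiebre desde hace dos días."))
```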
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4682 | 1.0 | 502 | 0.3263 | 0.6124 | 0.7344 | 0.6678 | 0.8939 |
| 0.2443 | 2.0 | 1004 | 0.2778 | 0.6809 | 0.7519 | 0.7147 | 0.9122 |
| 0.1728 | 3.0 | 1506 | 0.2898 | 0.7011 | 0.7481 | 0.7238 | 0.9155 |
| 0.1277 | 4.0 | 2008 | 0.3182 | 0.6970 | 0.7640 | 0.7290 | 0.9118 |
| 0.0928 | 5.0 | 2510 | 0.3578 | 0.6975 | 0.7667 | 0.7305 | 0.9128 |
| 0.0699 | 6.0 | 3012 | 0.3931 | 0.7058 | 0.7794 | 0.7407 | 0.9102 |
| 0.0538 | 7.0 | 3514 | 0.4213 | 0.7225 | 0.7574 | 0.7395 | 0.9140 |
| 0.0413 | 8.0 | 4016 | 0.4387 | 0.7143 | 0.7821 | 0.7467 | 0.9147 |
| 0.033 | 9.0 | 4518 | 0.4997 | 0.7184 | 0.7728 | 0.7446 | 0.9147 |
| 0.0265 | 10.0 | 5020 | 0.5056 | 0.7180 | 0.7728 | 0.7444 | 0.9152 |
| 0.0225 | 11.0 | 5522 | 0.5237 | 0.7250 | 0.7728 | 0.7481 | 0.9164 |
| 0.0176 | 12.0 | 6024 | 0.5376 | 0.7341 | 0.7772 | 0.7550 | 0.9177 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.9 | jordyvl | 2023-07-09T09:01:17Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T08:49:44Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.9
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3286
- Accuracy: 0.18
- Brier Loss: 0.8742
- Nll: 6.7213
- F1 Micro: 0.18
- F1 Macro: 0.0306
- Ece: 0.2558
- Aurc: 0.8491
## Model description
More information needed
## Intended uses & limitations
More information needed
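A minimal image-classification sketch, assuming the repository ships its image processor configuration; the image path is a placeholder.
```python
from transformers import pipeline

# "document.png" is a placeholder path to a scanned document image.
classifier = pipeline(
    "image-classification",
    model="jordyvl/dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.9",
)
print(classifier("document.png"))
```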
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 2.4683 | 0.145 | 0.8999 | 10.1538 | 0.145 | 0.0253 | 0.2220 | 0.8466 |
| No log | 1.96 | 6 | 2.4396 | 0.145 | 0.8947 | 10.5704 | 0.145 | 0.0253 | 0.2237 | 0.8463 |
| No log | 2.96 | 9 | 2.3985 | 0.145 | 0.8869 | 8.5511 | 0.145 | 0.0451 | 0.2116 | 0.8036 |
| No log | 3.96 | 12 | 2.3677 | 0.21 | 0.8810 | 6.5446 | 0.2100 | 0.0611 | 0.2566 | 0.8335 |
| No log | 4.96 | 15 | 2.3517 | 0.155 | 0.8780 | 6.8400 | 0.155 | 0.0279 | 0.2309 | 0.8894 |
| No log | 5.96 | 18 | 2.3450 | 0.18 | 0.8771 | 8.1897 | 0.18 | 0.0313 | 0.2495 | 0.8531 |
| No log | 6.96 | 21 | 2.3407 | 0.18 | 0.8767 | 7.3073 | 0.18 | 0.0306 | 0.2551 | 0.8513 |
| No log | 7.96 | 24 | 2.3371 | 0.18 | 0.8763 | 6.9328 | 0.18 | 0.0306 | 0.2501 | 0.8520 |
| No log | 8.96 | 27 | 2.3337 | 0.18 | 0.8757 | 6.8828 | 0.18 | 0.0306 | 0.2507 | 0.8525 |
| No log | 9.96 | 30 | 2.3321 | 0.18 | 0.8753 | 6.8682 | 0.18 | 0.0306 | 0.2508 | 0.8524 |
| No log | 10.96 | 33 | 2.3312 | 0.18 | 0.8751 | 6.7981 | 0.18 | 0.0306 | 0.2462 | 0.8521 |
| No log | 11.96 | 36 | 2.3309 | 0.18 | 0.8749 | 6.7375 | 0.18 | 0.0306 | 0.2531 | 0.8520 |
| No log | 12.96 | 39 | 2.3307 | 0.18 | 0.8748 | 6.7235 | 0.18 | 0.0306 | 0.2524 | 0.8518 |
| No log | 13.96 | 42 | 2.3304 | 0.18 | 0.8747 | 6.7200 | 0.18 | 0.0306 | 0.2482 | 0.8514 |
| No log | 14.96 | 45 | 2.3301 | 0.18 | 0.8746 | 6.7201 | 0.18 | 0.0306 | 0.2410 | 0.8509 |
| No log | 15.96 | 48 | 2.3298 | 0.18 | 0.8746 | 6.7182 | 0.18 | 0.0306 | 0.2449 | 0.8505 |
| No log | 16.96 | 51 | 2.3295 | 0.18 | 0.8745 | 6.7211 | 0.18 | 0.0306 | 0.2412 | 0.8500 |
| No log | 17.96 | 54 | 2.3297 | 0.18 | 0.8745 | 6.7201 | 0.18 | 0.0306 | 0.2449 | 0.8496 |
| No log | 18.96 | 57 | 2.3296 | 0.18 | 0.8745 | 6.7216 | 0.18 | 0.0306 | 0.2392 | 0.8494 |
| No log | 19.96 | 60 | 2.3292 | 0.18 | 0.8744 | 6.7214 | 0.18 | 0.0306 | 0.2371 | 0.8494 |
| No log | 20.96 | 63 | 2.3290 | 0.18 | 0.8744 | 6.7222 | 0.18 | 0.0306 | 0.2371 | 0.8493 |
| No log | 21.96 | 66 | 2.3288 | 0.18 | 0.8743 | 6.7227 | 0.18 | 0.0306 | 0.2408 | 0.8494 |
| No log | 22.96 | 69 | 2.3286 | 0.18 | 0.8743 | 6.7223 | 0.18 | 0.0306 | 0.2558 | 0.8490 |
| No log | 23.96 | 72 | 2.3286 | 0.18 | 0.8743 | 6.7218 | 0.18 | 0.0306 | 0.2558 | 0.8491 |
| No log | 24.96 | 75 | 2.3286 | 0.18 | 0.8742 | 6.7213 | 0.18 | 0.0306 | 0.2558 | 0.8491 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
crisU8/bert-finetuned-ner-clinical-BETO-uncased-4 | crisU8 | 2023-07-09T08:59:59Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-09T08:54:00Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-BETO-uncased-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-BETO-uncased-4
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4171
- Precision: 0.7142
- Recall: 0.7722
- F1: 0.7421
- Accuracy: 0.9150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0602 | 1.0 | 502 | 0.3957 | 0.7006 | 0.7552 | 0.7269 | 0.9089 |
| 0.0596 | 2.0 | 1004 | 0.3879 | 0.7198 | 0.7629 | 0.7407 | 0.9146 |
| 0.0575 | 3.0 | 1506 | 0.4171 | 0.7142 | 0.7722 | 0.7421 | 0.9150 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
crisU8/bert-finetuned-ner-clinical-BETO-uncased-1 | crisU8 | 2023-07-09T08:40:27Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-09T08:35:19Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-BETO-uncased-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-BETO-uncased-1
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3018
- Precision: 0.6953
- Recall: 0.7464
- F1: 0.7200
- Accuracy: 0.9155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4647 | 1.0 | 502 | 0.3156 | 0.6186 | 0.7327 | 0.6709 | 0.8969 |
| 0.2428 | 2.0 | 1004 | 0.2804 | 0.6916 | 0.7470 | 0.7182 | 0.9120 |
| 0.1734 | 3.0 | 1506 | 0.2864 | 0.6923 | 0.7508 | 0.7204 | 0.9161 |
| 0.1353 | 4.0 | 2008 | 0.3018 | 0.6953 | 0.7464 | 0.7200 | 0.9155 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cambioml/rlhf-reward-model | cambioml | 2023-07-09T08:36:38Z | 136 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-09T07:59:15Z | # 🚀 RLHF Step-2 Reward Model
This repository is home to an RLHF reward model. The model is trained on questions and answers from the Stack Exchange data dump (https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences), using the `distilroberta-base` model (https://huggingface.co/distilroberta-base) as a base.
## Usage
You can use this model directly with a `sentiment-analysis` pipeline to assign a scalar reward score to a prompt/response pair:
```python
from accelerate import Accelerator
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    pipeline
)

reward_model_name = "cambioml/rlhf-reward-model"

reward_model = AutoModelForSequenceClassification.from_pretrained(
    reward_model_name,
    num_labels=1,
    # torch_dtype=torch.bfloat16,
    load_in_8bit=True,
    device_map={"": Accelerator().process_index}
)
reward_tokenizer = AutoTokenizer.from_pretrained(reward_model_name)
reward_tokenizer.pad_token = reward_tokenizer.eos_token

# Kwargs for scoring calls; these are passed when the pipeline is invoked,
# not to from_pretrained.
reward_kwargs = {
    "return_all_scores": True,
    "function_to_apply": "none",
    "batch_size": 32,
    "truncation": True,
    "max_length": 138
}

reward_pipe = pipeline(
    "sentiment-analysis",
    model=reward_model,
    tokenizer=reward_tokenizer,
    return_token_type_ids=False,
)
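
# Example scoring call; the question/answer text below is made up.
# reward_kwargs are passed at call time: with num_labels=1 and
# return_all_scores=True, each output is a one-element list of dicts
# whose "score" field is the raw (unnormalized) reward.
texts = ["Question: How do I reverse a list in Python?\n\nAnswer: Use list.reverse() or slicing with [::-1]."]
outputs = reward_pipe(texts, **reward_kwargs)
rewards = [output[0]["score"] for output in outputs]
print(rewards)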
``` |
TrubnyaviyOrk/Ereshkigal | TrubnyaviyOrk | 2023-07-09T08:34:35Z | 0 | 0 | null | [
"rvc",
"audio-to-audio",
"ja",
"license:mit",
"region:us"
] | audio-to-audio | 2023-07-09T08:30:13Z | ---
license: mit
language:
- ja
pipeline_tag: audio-to-audio
tags:
- rvc
--- |
saintzeno/a2c-PandaReachDense-v2 | saintzeno | 2023-07-09T08:26:17Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T06:25:19Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.83 +/- 0.18
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the .zip filename is an assumption.
checkpoint = load_from_hub("saintzeno/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
jordyvl/dit-small_tobacco3482_kd_CEKD_t1.5_a0.5 | jordyvl | 2023-07-09T08:22:33Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T08:09:15Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_tobacco3482_kd_CEKD_t1.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-small_tobacco3482_kd_CEKD_t1.5_a0.5
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8753
- Accuracy: 0.185
- Brier Loss: 0.8660
- Nll: 6.5533
- F1 Micro: 0.185
- F1 Macro: 0.0488
- Ece: 0.2451
- Aurc: 0.7363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 3.1378 | 0.06 | 0.9042 | 9.2898 | 0.06 | 0.0114 | 0.1754 | 0.9032 |
| No log | 1.96 | 6 | 3.0447 | 0.18 | 0.8884 | 6.2145 | 0.18 | 0.0305 | 0.2294 | 0.8048 |
| No log | 2.96 | 9 | 2.9500 | 0.18 | 0.8761 | 6.9445 | 0.18 | 0.0305 | 0.2447 | 0.8193 |
| No log | 3.96 | 12 | 2.9328 | 0.18 | 0.8800 | 6.9512 | 0.18 | 0.0305 | 0.2565 | 0.8122 |
| No log | 4.96 | 15 | 2.9305 | 0.185 | 0.8793 | 6.9136 | 0.185 | 0.0488 | 0.2557 | 0.7823 |
| No log | 5.96 | 18 | 2.9286 | 0.185 | 0.8762 | 6.7762 | 0.185 | 0.0488 | 0.2533 | 0.7721 |
| No log | 6.96 | 21 | 2.9265 | 0.185 | 0.8731 | 5.9902 | 0.185 | 0.0488 | 0.2345 | 0.7682 |
| No log | 7.96 | 24 | 2.9240 | 0.185 | 0.8718 | 5.9696 | 0.185 | 0.0488 | 0.2625 | 0.7621 |
| No log | 8.96 | 27 | 2.9177 | 0.185 | 0.8707 | 5.9711 | 0.185 | 0.0488 | 0.2463 | 0.7578 |
| No log | 9.96 | 30 | 2.9129 | 0.185 | 0.8702 | 6.6932 | 0.185 | 0.0488 | 0.2485 | 0.7574 |
| No log | 10.96 | 33 | 2.9082 | 0.185 | 0.8704 | 6.7772 | 0.185 | 0.0488 | 0.2500 | 0.7560 |
| No log | 11.96 | 36 | 2.9039 | 0.185 | 0.8707 | 6.8060 | 0.185 | 0.0488 | 0.2464 | 0.7537 |
| No log | 12.96 | 39 | 2.8990 | 0.185 | 0.8704 | 6.7988 | 0.185 | 0.0488 | 0.2466 | 0.7515 |
| No log | 13.96 | 42 | 2.8933 | 0.185 | 0.8696 | 6.7771 | 0.185 | 0.0488 | 0.2505 | 0.7479 |
| No log | 14.96 | 45 | 2.8879 | 0.185 | 0.8688 | 6.7597 | 0.185 | 0.0488 | 0.2523 | 0.7482 |
| No log | 15.96 | 48 | 2.8840 | 0.185 | 0.8679 | 6.6825 | 0.185 | 0.0488 | 0.2648 | 0.7454 |
| No log | 16.96 | 51 | 2.8822 | 0.185 | 0.8676 | 6.6742 | 0.185 | 0.0488 | 0.2473 | 0.7425 |
| No log | 17.96 | 54 | 2.8819 | 0.185 | 0.8672 | 6.5521 | 0.185 | 0.0488 | 0.2479 | 0.7405 |
| No log | 18.96 | 57 | 2.8817 | 0.185 | 0.8671 | 6.5498 | 0.185 | 0.0488 | 0.2536 | 0.7385 |
| No log | 19.96 | 60 | 2.8797 | 0.185 | 0.8667 | 6.5563 | 0.185 | 0.0488 | 0.2442 | 0.7371 |
| No log | 20.96 | 63 | 2.8784 | 0.185 | 0.8666 | 6.6145 | 0.185 | 0.0488 | 0.2528 | 0.7374 |
| No log | 21.96 | 66 | 2.8770 | 0.185 | 0.8663 | 6.6084 | 0.185 | 0.0488 | 0.2489 | 0.7366 |
| No log | 22.96 | 69 | 2.8760 | 0.185 | 0.8662 | 6.5683 | 0.185 | 0.0488 | 0.2448 | 0.7360 |
| No log | 23.96 | 72 | 2.8756 | 0.185 | 0.8661 | 6.5544 | 0.185 | 0.0488 | 0.2450 | 0.7363 |
| No log | 24.96 | 75 | 2.8753 | 0.185 | 0.8660 | 6.5533 | 0.185 | 0.0488 | 0.2451 | 0.7363 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Papaker/Tsoy | Papaker | 2023-07-09T08:21:43Z | 0 | 0 | null | [
"music",
"ru",
"license:other",
"region:us"
] | null | 2023-07-09T08:18:03Z | ---
license: other
language:
- ru
tags:
- music
--- |
cagarraz/ppo-SnowballTarget | cagarraz | 2023-07-09T08:18:00Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-09T08:04:18Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: cagarraz/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
jordyvl/dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.5 | jordyvl | 2023-07-09T08:08:30Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T07:52:51Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_tobacco3482_kd_CEKD_t1.5_a0.5
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9246
- Accuracy: 0.18
- Brier Loss: 0.8755
- Nll: 6.7967
- F1 Micro: 0.18
- F1 Macro: 0.0306
- Ece: 0.2497
- Aurc: 0.8499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 3.1239 | 0.145 | 0.8999 | 10.1580 | 0.145 | 0.0253 | 0.2222 | 0.8467 |
| No log | 1.96 | 6 | 3.0895 | 0.145 | 0.8946 | 10.5934 | 0.145 | 0.0253 | 0.2303 | 0.8470 |
| No log | 2.96 | 9 | 3.0385 | 0.165 | 0.8866 | 8.6307 | 0.165 | 0.0502 | 0.2200 | 0.8458 |
| No log | 3.96 | 12 | 2.9972 | 0.21 | 0.8806 | 6.5449 | 0.2100 | 0.0615 | 0.2512 | 0.8364 |
| No log | 4.96 | 15 | 2.9719 | 0.155 | 0.8776 | 6.7565 | 0.155 | 0.0271 | 0.2414 | 0.8884 |
| No log | 5.96 | 18 | 2.9579 | 0.215 | 0.8768 | 7.0870 | 0.2150 | 0.0643 | 0.2713 | 0.8778 |
| No log | 6.96 | 21 | 2.9485 | 0.18 | 0.8768 | 7.0291 | 0.18 | 0.0308 | 0.2482 | 0.8532 |
| No log | 7.96 | 24 | 2.9417 | 0.18 | 0.8770 | 6.9706 | 0.18 | 0.0306 | 0.2559 | 0.8525 |
| No log | 8.96 | 27 | 2.9360 | 0.18 | 0.8768 | 6.9349 | 0.18 | 0.0306 | 0.2498 | 0.8527 |
| No log | 9.96 | 30 | 2.9326 | 0.18 | 0.8767 | 6.9268 | 0.18 | 0.0306 | 0.2635 | 0.8533 |
| No log | 10.96 | 33 | 2.9303 | 0.18 | 0.8765 | 6.9226 | 0.18 | 0.0306 | 0.2637 | 0.8531 |
| No log | 11.96 | 36 | 2.9289 | 0.18 | 0.8764 | 6.9217 | 0.18 | 0.0306 | 0.2591 | 0.8524 |
| No log | 12.96 | 39 | 2.9279 | 0.18 | 0.8762 | 6.8547 | 0.18 | 0.0306 | 0.2505 | 0.8526 |
| No log | 13.96 | 42 | 2.9270 | 0.18 | 0.8760 | 6.8491 | 0.18 | 0.0306 | 0.2500 | 0.8520 |
| No log | 14.96 | 45 | 2.9263 | 0.18 | 0.8759 | 6.8471 | 0.18 | 0.0306 | 0.2463 | 0.8518 |
| No log | 15.96 | 48 | 2.9258 | 0.18 | 0.8758 | 6.8445 | 0.18 | 0.0306 | 0.2462 | 0.8520 |
| No log | 16.96 | 51 | 2.9255 | 0.18 | 0.8758 | 6.8452 | 0.18 | 0.0306 | 0.2587 | 0.8511 |
| No log | 17.96 | 54 | 2.9256 | 0.18 | 0.8758 | 6.7940 | 0.18 | 0.0306 | 0.2585 | 0.8513 |
| No log | 18.96 | 57 | 2.9256 | 0.18 | 0.8758 | 6.7930 | 0.18 | 0.0306 | 0.2625 | 0.8508 |
| No log | 19.96 | 60 | 2.9252 | 0.18 | 0.8757 | 6.7945 | 0.18 | 0.0306 | 0.2580 | 0.8506 |
| No log | 20.96 | 63 | 2.9250 | 0.18 | 0.8756 | 6.7999 | 0.18 | 0.0306 | 0.2539 | 0.8505 |
| No log | 21.96 | 66 | 2.9248 | 0.18 | 0.8756 | 6.8441 | 0.18 | 0.0306 | 0.2538 | 0.8502 |
| No log | 22.96 | 69 | 2.9247 | 0.18 | 0.8755 | 6.8439 | 0.18 | 0.0306 | 0.2497 | 0.8500 |
| No log | 23.96 | 72 | 2.9247 | 0.18 | 0.8755 | 6.7977 | 0.18 | 0.0306 | 0.2497 | 0.8500 |
| No log | 24.96 | 75 | 2.9246 | 0.18 | 0.8755 | 6.7967 | 0.18 | 0.0306 | 0.2497 | 0.8499 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
chunwoolee0/my_awesome_qa_model | chunwoolee0 | 2023-07-09T08:00:28Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-09T07:50:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5944
## Model description
More information needed
## Intended uses & limitations
More information needed
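A minimal extractive question-answering sketch, using the standard `question-answering` pipeline; the question and context below are made up for illustration.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="chunwoolee0/my_awesome_qa_model")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```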
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.2632 |
| 2.6568 | 2.0 | 500 | 1.6629 |
| 2.6568 | 3.0 | 750 | 1.5944 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
daiwenbin/distilbert-base-uncased-finetuned-clinc | daiwenbin | 2023-07-09T07:15:00Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-09T02:43:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9138709677419354
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7816
- Accuracy: 0.9139
## Model description
More information needed
## Intended uses & limitations
More information needed
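A minimal intent-classification sketch; the utterance is made up, and the predicted label names come from this checkpoint's config (they may be generic ids if no label mapping was saved).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="daiwenbin/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Please transfer 100 dollars from my checking account to savings."))
```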
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2795 | 0.7277 |
| 3.7861 | 2.0 | 636 | 1.8741 | 0.8294 |
| 3.7861 | 3.0 | 954 | 1.1621 | 0.8906 |
| 1.6946 | 4.0 | 1272 | 0.8663 | 0.9058 |
| 0.9106 | 5.0 | 1590 | 0.7816 | 0.9139 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.13.3
|
abdulfatir/NCDSSM | abdulfatir | 2023-07-09T07:00:23Z | 0 | 2 | null | [
"arxiv:2301.11308",
"license:mit",
"region:us"
] | null | 2023-07-09T06:54:08Z | ---
license: mit
---
# Neural Continuous-Discrete State Space Models (NCDSSM)
This repository contains pretrained checkpoints for reproducing the experiments presented in the ICML 2023 paper [*Neural Continuous-Discrete State Space Models for Irregularly-Sampled Time Series*](https://arxiv.org/abs/2301.11308). For details on how to use these checkpoints, please refer to https://github.com/clear-nus/NCDSSM.
|
Dorost/resume | Dorost | 2023-07-09T06:46:02Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-30T10:41:45Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: resume
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resume
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0166
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
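A minimal classification sketch; the input text is made up, and the predicted label names come from this checkpoint's config.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Dorost/resume")
print(classifier("Experienced data scientist with five years of Python and NLP experience."))
```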
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0448 | 1.0 | 49 | 2.7245 | 0.1290 |
| 2.2276 | 2.0 | 98 | 1.7165 | 0.4683 |
| 1.116 | 3.0 | 147 | 0.8720 | 0.8333 |
| 0.5606 | 4.0 | 196 | 0.3686 | 1.0 |
| 0.2374 | 5.0 | 245 | 0.1431 | 1.0 |
| 0.1084 | 6.0 | 294 | 0.0612 | 1.0 |
| 0.0598 | 7.0 | 343 | 0.0328 | 1.0 |
| 0.0386 | 8.0 | 392 | 0.0216 | 1.0 |
| 0.0276 | 9.0 | 441 | 0.0175 | 1.0 |
| 0.0271 | 10.0 | 490 | 0.0166 | 1.0 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
digiplay/Pika_v1 | digiplay | 2023-07-09T06:44:58Z | 289 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-22T13:13:29Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/47067?modelVersionId=51650
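A minimal diffusers sketch; the prompt, step count, and fp16/CUDA settings are arbitrary assumptions.
```python
import torch
from diffusers import StableDiffusionPipeline

# Arbitrary prompt; fp16 weights and a CUDA device are assumptions.
pipe = StableDiffusionPipeline.from_pretrained("digiplay/Pika_v1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("8k angel, rainbow sky, ultra detailed", num_inference_steps=25).images[0]
image.save("pika_v1_sample.png")
```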
Original Author's DEMO images :


|
digiplay/Pika_v2 | digiplay | 2023-07-09T06:40:08Z | 327 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-22T13:14:53Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/47067?modelVersionId=71733
Sample images I made :
8k Angel rainbow sky, ultra detailed ,upper body ,(realistic :0.4)

8k Angel rainbow sky, ultra detailed ,upper body ,(realistic :1.4)

8k Angel rainbow sky, ultra detailed ,upper body ,(realistic :1.4),wide-angle

Original Author's DEMO image :

|
YeungNLP/Ziya-LLaMA-13B-Pretrain-v1 | YeungNLP | 2023-07-09T06:35:20Z | 12 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-21T10:35:14Z | This model was obtained by merging [IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1) with the original LLaMA weights.
[firefly-ziya-13b](https://huggingface.co/YeungNLP/firefly-ziya-13b) is instruction-tuned on top of this model.
For more details, see the [Firefly project](https://github.com/yangjianxin1/Firefly) |
laserchalk/kangaroo-0-5-training | laserchalk | 2023-07-09T06:31:28Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-09T06:26:39Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### kangaroo-0.5-training Dreambooth model trained by laserchalk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
KKSK2023/ppo-LunarLander-v2 | KKSK2023 | 2023-07-09T06:27:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T06:27:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.57 +/- 19.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the .zip filename is an assumption.
checkpoint = load_from_hub("KKSK2023/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
demelianov/model | demelianov | 2023-07-09T06:27:04Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-08T05:14:01Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - demelianov/model
This is a DreamBooth model derived from stabilityai/stable-diffusion-2. The weights were trained on the instance prompt "a photo of sks person" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
NasimB/gpt2-concat-mod-datasets-rarity1-rarity-all-13k-2p6k | NasimB | 2023-07-09T06:04:01Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-09T04:14:33Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-mod-datasets-rarity1-rarity-all-13k-2p6k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-mod-datasets-rarity1-rarity-all-13k-2p6k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2382
## Model description
More information needed
## Intended uses & limitations
More information needed
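A minimal text-generation sketch; the prompt and sampling settings are arbitrary choices.
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-mod-datasets-rarity1-rarity-all-13k-2p6k",
)
print(generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```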
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7973 | 0.32 | 500 | 5.8474 |
| 5.4953 | 0.65 | 1000 | 5.4602 |
| 5.1505 | 0.97 | 1500 | 5.2610 |
| 4.8711 | 1.29 | 2000 | 5.1460 |
| 4.7547 | 1.61 | 2500 | 5.0485 |
| 4.6592 | 1.94 | 3000 | 4.9997 |
| 4.4552 | 2.26 | 3500 | 4.9771 |
| 4.4024 | 2.58 | 4000 | 4.9469 |
| 4.3565 | 2.91 | 4500 | 4.8791 |
| 4.1703 | 3.23 | 5000 | 4.9096 |
| 4.1146 | 3.55 | 5500 | 4.8802 |
| 4.097 | 3.88 | 6000 | 4.8532 |
| 3.9182 | 4.2 | 6500 | 4.8784 |
| 3.8312 | 4.52 | 7000 | 4.8790 |
| 3.8217 | 4.84 | 7500 | 4.8563 |
| 3.6814 | 5.17 | 8000 | 4.8842 |
| 3.5716 | 5.49 | 8500 | 4.9002 |
| 3.563 | 5.81 | 9000 | 4.8909 |
| 3.4914 | 6.14 | 9500 | 4.9122 |
| 3.407 | 6.46 | 10000 | 4.9184 |
| 3.4075 | 6.78 | 10500 | 4.9186 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
yodi/gpt-2-finetuned-papers | yodi | 2023-07-09T05:54:05Z | 62 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-09T02:51:49Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: yodi/gpt-2-finetuned-papers
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# yodi/gpt-2-finetuned-papers
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9448
- Validation Loss: 1.8459
- Epoch: 10
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'ExponentialDecay', 'config': {'initial_learning_rate': 0.0005, 'decay_steps': 500, 'decay_rate': 0.95, 'staircase': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4234 | 2.1273 | 0 |
| 2.1829 | 1.9976 | 1 |
| 2.0794 | 1.9288 | 2 |
| 2.0208 | 1.8907 | 3 |
| 1.9872 | 1.8705 | 4 |
| 1.9680 | 1.8579 | 5 |
| 1.9572 | 1.8519 | 6 |
| 1.9511 | 1.8491 | 7 |
| 1.9478 | 1.8471 | 8 |
| 1.9458 | 1.8464 | 9 |
| 1.9448 | 1.8459 | 10 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yunkai/yolo-lightnet | yunkai | 2023-07-09T05:47:52Z | 0 | 1 | null | [
"darknet",
"yolo",
"object-detection",
"license:apache-2.0",
"region:us"
] | object-detection | 2023-07-06T09:38:20Z | ---
license: apache-2.0
tags:
- darknet
- yolo
pipeline_tag: object-detection
---
# yolo-lightnet
<!-- Provide a quick summary of what the model is/does. -->
This is a YOLO model optimized for running on NVDLA.
*NOTE: The weights are in Darknet format, **NOT** PyTorch.*
## Model Details
<!-- Provide a longer summary of what this model is. -->
File names are formatted as `lightnet-{name}-{resolution}.weights`.
### driving
- target: car, bus, person, bike, truck, motor, train, rider, traffic_sign, traffic_light
- training data: BDD100K
### face
- target: face
- training data: WIDER FACE
### head_body
- target: head, body (including hidden areas)
- training data: CrowdHuman
### head_body-visible
- target: head, body (visible areas only)
- training data: CrowdHuman
- cfg file and label names are the same as `head_body`
## Uses
Running on:
https://github.com/daniel89710/lightNet-TRT
https://github.com/AlexeyAB/darknet
|
luhx/Reinforce-CartPole-v1 | luhx | 2023-07-09T05:09:01Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T05:08:52Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 486.50 +/- 40.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jason1i/whisper-tiny-minds14 | jason1i | 2023-07-09T05:01:53Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-09T04:37:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.34415584415584416
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6338
- Wer Ortho: 0.3467
- Wer: 0.3442
## Model description
More information needed
## Intended uses & limitations
More information needed
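A minimal transcription sketch; "audio.wav" is a placeholder path to a short English recording, and ffmpeg is assumed to be available for decoding.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jason1i/whisper-tiny-minds14")
print(asr("audio.wav")["text"])
```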
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.001 | 17.86 | 500 | 0.6338 | 0.3467 | 0.3442 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PhantasyMaker/Kate | PhantasyMaker | 2023-07-09T03:55:50Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-09T03:55:50Z | ---
license: creativeml-openrail-m
---
|
NasimB/gpt2-concat-aochildes-length-15k | NasimB | 2023-07-09T03:36:06Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-09T01:38:55Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-length-15k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-length-15k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7208 | 0.29 | 500 | 5.6413 |
| 5.3798 | 0.59 | 1000 | 5.2022 |
| 5.026 | 0.88 | 1500 | 4.9544 |
| 4.7535 | 1.18 | 2000 | 4.8031 |
| 4.5938 | 1.47 | 2500 | 4.6839 |
| 4.4847 | 1.76 | 3000 | 4.5811 |
| 4.3568 | 2.06 | 3500 | 4.5046 |
| 4.1613 | 2.35 | 4000 | 4.4593 |
| 4.1394 | 2.65 | 4500 | 4.4021 |
| 4.0897 | 2.94 | 5000 | 4.3497 |
| 3.874 | 3.24 | 5500 | 4.3454 |
| 3.8331 | 3.53 | 6000 | 4.3191 |
| 3.8104 | 3.82 | 6500 | 4.2890 |
| 3.6885 | 4.12 | 7000 | 4.2909 |
| 3.5369 | 4.41 | 7500 | 4.2866 |
| 3.5339 | 4.71 | 8000 | 4.2735 |
| 3.5159 | 5.0 | 8500 | 4.2598 |
| 3.3458 | 5.29 | 9000 | 4.2780 |
| 3.3397 | 5.59 | 9500 | 4.2764 |
| 3.3365 | 5.88 | 10000 | 4.2765 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
saintzeno/custom-LunarLander-v2 | saintzeno | 2023-07-09T03:01:10Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T03:01:07Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -89.33 +/- 50.83
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'saintzeno/custom-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
gosorio/minilmFT_TripAdvisor_Sentiment | gosorio | 2023-07-09T02:33:07Z | 185 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:argilla/tripadvisor-hotel-reviews",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-09T02:16:11Z | ---
datasets:
- argilla/tripadvisor-hotel-reviews
language:
- en
metrics:
- accuracy: 0.9018
- F-1 score: 0.8956
pipeline_tag: text-classification
---
A sentiment analysis model based on the pre-trained MiniLM (from https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) and fine-tuned on a dataset of TripAdvisor reviews (from https://www.kaggle.com/datasets/arnabchaki/tripadvisor-reviews-2023).
Reviews with 1 or 2 stars are considered 'Negative', 3 stars are 'Neutral', and 4 or 5 stars are 'Positive'.
Load it with the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load pre-trained model and tokenizer
model_name = "gosorio/minilmFT_TripAdvisor_Sentiment"
tokenizer_name = "microsoft/MiniLM-L12-H384-uncased" # the standard MiniLM
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3).to(device)
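
# Example inference; the review below is made up, and the
# Negative/Neutral/Positive index order is an assumption.
review = "The room was spotless and the staff were incredibly helpful."
inputs = tokenizer(review, return_tensors="pt", truncation=True).to(device)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(["Negative", "Neutral", "Positive"][pred])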
``` |
sachiniyer/tweet_toxicity | sachiniyer | 2023-07-09T02:13:20Z | 128 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"dataset:jigsaw_toxicity_pred",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-23T09:37:30Z | ---
datasets:
- jigsaw_toxicity_pred
metrics:
- accuracy
- bertscore
--- |
saintzeno/ppo-Pyramids | saintzeno | 2023-07-09T01:44:03Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-09T01:43:57Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: saintzeno/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
nolanaatama/blllsh2021vrrvcv2600pchshstpn | nolanaatama | 2023-07-09T00:33:52Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-24T07:33:08Z | ---
license: creativeml-openrail-m
---
|
KennethEnevoldsen/dfm-sentence-encoder-medium | KennethEnevoldsen | 2023-07-09T00:16:21Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-09T00:16:08Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# KennethEnevoldsen/dfm-sentence-encoder-medium
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('KennethEnevoldsen/dfm-sentence-encoder-medium')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('KennethEnevoldsen/dfm-sentence-encoder-medium')
model = AutoModel.from_pretrained('KennethEnevoldsen/dfm-sentence-encoder-medium')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=KennethEnevoldsen/dfm-sentence-encoder-medium)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 12500 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
skywalker7/LunarWalker | skywalker7 | 2023-07-08T23:40:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T23:40:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.93 +/- 17.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
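A minimal loading sketch with `huggingface_sb3` and `stable-baselines3`; the checkpoint filename below is an assumption, so check the repo's file list for the actual `.zip` name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub(repo_id="skywalker7/LunarWalker", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```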
|
ABDUULAHH/ABDULLAH-GPT | ABDUULAHH | 2023-07-08T23:23:25Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-08T23:23:25Z | ---
license: bigscience-openrail-m
---
|
renatostrianese/ppo-Huggy | renatostrianese | 2023-07-08T23:20:21Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-08T23:20:16Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: renatostrianese/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gvenkat21/reviews-feedback-nudge | gvenkat21 | 2023-07-08T23:11:15Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-08T22:08:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
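A rough `transformers` sketch of the equivalent quantization config for reloading the base model before attaching these adapters; the base-model id and the causal-LM head are placeholders, since the card does not name them:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the 8-bit settings listed above; the 4-bit fields are defaults and omitted
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

# "base-model-id" is a placeholder: the card does not say which base model the adapters target
base = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "gvenkat21/reviews-feedback-nudge")
```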
### Framework versions
- PEFT 0.4.0.dev0
|
digiplay/Realisian_v1 | digiplay | 2023-07-08T22:59:08Z | 296 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-08T15:08:17Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/47130?modelVersionId=51711
Sample images I made:


|
elinas/chronos-13b-8k-GPTQ | elinas | 2023-07-08T22:48:34Z | 15 | 3 | transformers | [
"transformers",
"llama",
"text-generation",
"chatbot",
"gptq",
"storywriting",
"custom_code",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T23:31:35Z | ---
license: other
tags:
- chatbot
- gptq
- storywriting
---
# chronos-13b-8K-4bit
The original Chronos-13B model was merged with a LoRA trained in full 8-bit on roughly 1,500 samples of the same style, most of them in the 8,000-token range, with an 8k-token cutoff. It is meant to be used standalone, but if you would like to merge/combine the LoRA on your own, you can find it here: https://huggingface.co/ZeusLabs/chronos-13b-8k-lora
The `config.json` includes modifications that allow extended context, so you will need to load the model with `trust_remote_code` if you are not using Exllama.
This is a 4-bit (int4) quantization, using `true-sequential` and `groupsize 128`, of https://huggingface.co/elinas/chronos-13b plus https://huggingface.co/ZeusLabs/chronos-13b-8k-lora.
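One possible loading sketch with AutoGPTQ; this assumes the repo ships a standard `quantize_config.json` that `from_quantized` can pick up automatically, so treat it as a sketch rather than the canonical loader:
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "elinas/chronos-13b-8k-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
# trust_remote_code picks up the extended-context changes in config.json
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0", trust_remote_code=True)
```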
This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
Chronos generates very long outputs with coherent text, largely due to the human inputs it was trained on.
This model uses Alpaca formatting, so for optimal model performance, use:
```
### Instruction:
Your instruction or question here.
### Response:
```
[Zeus Labs Discord](https://discord.gg/76e2HBzRKD) |
hongrui/chest_mimic_v_1 | hongrui | 2023-07-08T22:39:07Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-08T13:09:12Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/chest_mimic_v_1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the hongrui/mimic_chest_xray_v_1 dataset. You can find some example images below.




|
hbenitez/food_classifier | hbenitez | 2023-07-08T22:37:36Z | 63 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-06T21:28:12Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hbenitez/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hbenitez/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3735
- Validation Loss: 2.5622
- Train Accuracy: 0.0769
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 260, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.5417 | 2.5922 | 0.0 | 0 |
| 2.5103 | 2.5856 | 0.0 | 1 |
| 2.4593 | 2.5738 | 0.0 | 2 |
| 2.4104 | 2.5671 | 0.0 | 3 |
| 2.3735 | 2.5622 | 0.0769 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0-rc2
- Datasets 2.13.1
- Tokenizers 0.13.3
|
grace-pro/bert-finetuned-hausa | grace-pro | 2023-07-08T22:07:37Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-07T21:03:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-hausa
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1505
- Precision: 0.6680
- Recall: 0.4474
- F1: 0.5359
- Accuracy: 0.9557
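A minimal sketch for running the model as a token-classification pipeline; the input string is a placeholder:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="grace-pro/bert-finetuned-hausa", aggregation_strategy="simple")
print(ner("Replace this with a Hausa sentence."))  # placeholder input
```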
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1683 | 1.0 | 2624 | 0.1589 | 0.6480 | 0.3641 | 0.4663 | 0.9513 |
| 0.1446 | 2.0 | 5248 | 0.1509 | 0.6658 | 0.4147 | 0.5111 | 0.9543 |
| 0.1163 | 3.0 | 7872 | 0.1505 | 0.6680 | 0.4474 | 0.5359 | 0.9557 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Hariharavarshan/Cover_genie | Hariharavarshan | 2023-07-08T21:48:30Z | 172 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"arxiv:2210.11416",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-03T06:27:28Z | ---
license: apache-2.0
language:
- en
metrics:
- rouge
library_name: transformers
---
# Model Card for CoverGenie
<!-- Provide a quick summary of what the model is/does. -->
The goal of this project is to build a fine-grained mini-ChatGPT (named “CoverGenie”), designed to generate resumes and cover letters based on job descriptions from the tech field.
By nature, it is a language generation task: it takes a job description as an input sequence of text and turns it into a resume and cover letter with a structured, particular style.
This might involve parameter-efficient finetuning, reinforcement learning, and prompt engineering to some extent.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** T5 (Text-to-Text-Transfer-Transformer)
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0
- **Finetuned from model:** FlanT5 Large
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** https://arxiv.org/pdf/2210.11416.pdf
## Uses
It can generate a cover letter given a candidate's **job description** and **resume** as input.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, GenerationConfig
import nltk
nltk.download('punkt')
max_source_length=512
tokenizer = AutoTokenizer.from_pretrained("Hariharavarshan/Cover_genie")
model = AutoModelForSeq2SeqLM.from_pretrained("Hariharavarshan/Cover_genie")
JD='''<Job description Text>'''
resume_text= '''<Resume Text>'''
final_text="give me a cover letter based on the a job description and a resume. Job description:"+JD +" Resume:"+ resume_text
generation_config = GenerationConfig.from_pretrained("google/flan-t5-large",temperature=2.0)
inputs = tokenizer(final_text, max_length=max_source_length, truncation=True, return_tensors="pt")
output = model.generate(**inputs, num_beams=3, do_sample=True, min_length=1000,
max_length=10000,generation_config=generation_config,num_return_sequences=3)
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
generated_Coverletter = nltk.sent_tokenize(decoded_output.strip())
```
**Developed by:** Hariharavarshan, Jayathilaga, Sara, Meiyu
|
rsilg/Reinforce-CartPole-v1 | rsilg | 2023-07-08T21:28:20Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T21:28:11Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
varcoder/segformer-b0-DeepCrack | varcoder | 2023-07-08T21:22:38Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-07-08T00:57:37Z | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-DeepCrack
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-DeepCrack
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3347
- Mean Iou: 0.6839
- Mean Accuracy: 0.7408
- Overall Accuracy: 0.9681
- Accuracy Background: 0.9897
- Accuracy Crack: 0.4918
- Iou Background: 0.9674
- Iou Crack: 0.4003
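A minimal inference sketch; as an assumption, the image processor is loaded from the base `nvidia/mit-b0` checkpoint in case this repo does not ship its own preprocessor config, and the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")  # assumption: reuse the base model's preprocessor
model = SegformerForSemanticSegmentation.from_pretrained("varcoder/segformer-b0-DeepCrack")

image = Image.open("crack_example.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, height/4, width/4)
pred = logits.argmax(dim=1)[0]  # per-pixel class ids (background vs. crack)
```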
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Crack | Iou Background | Iou Crack |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:--------------:|:--------------:|:---------:|
| 0.8203 | 0.03 | 5 | 0.6973 | 0.3317 | 0.7410 | 0.5924 | 0.5783 | 0.9037 | 0.5758 | 0.0876 |
| 0.7469 | 0.07 | 10 | 0.6930 | 0.3533 | 0.7185 | 0.6325 | 0.6244 | 0.8125 | 0.6192 | 0.0873 |
| 0.7324 | 0.1 | 15 | 0.6884 | 0.3545 | 0.6605 | 0.6436 | 0.6421 | 0.6788 | 0.6329 | 0.0762 |
| 0.7079 | 0.13 | 20 | 0.6910 | 0.2537 | 0.5518 | 0.4726 | 0.4650 | 0.6386 | 0.4576 | 0.0498 |
| 0.6472 | 0.17 | 25 | 0.6831 | 0.2972 | 0.5734 | 0.5519 | 0.5498 | 0.5969 | 0.5400 | 0.0545 |
| 0.6344 | 0.2 | 30 | 0.6630 | 0.4652 | 0.7477 | 0.8045 | 0.8099 | 0.6854 | 0.7985 | 0.1318 |
| 0.6264 | 0.23 | 35 | 0.6389 | 0.5567 | 0.7850 | 0.8977 | 0.9084 | 0.6617 | 0.8947 | 0.2187 |
| 0.5811 | 0.27 | 40 | 0.6087 | 0.6070 | 0.8069 | 0.9279 | 0.9394 | 0.6745 | 0.9257 | 0.2882 |
| 0.5928 | 0.3 | 45 | 0.5584 | 0.6469 | 0.7851 | 0.9503 | 0.9660 | 0.6042 | 0.9490 | 0.3448 |
| 0.5312 | 0.33 | 50 | 0.5476 | 0.6508 | 0.7789 | 0.9527 | 0.9692 | 0.5886 | 0.9515 | 0.3502 |
| 0.5209 | 0.37 | 55 | 0.5423 | 0.6561 | 0.7665 | 0.9564 | 0.9744 | 0.5586 | 0.9553 | 0.3568 |
| 0.4675 | 0.4 | 60 | 0.5332 | 0.6470 | 0.7529 | 0.9553 | 0.9745 | 0.5313 | 0.9543 | 0.3397 |
| 0.4831 | 0.43 | 65 | 0.4772 | 0.6746 | 0.7502 | 0.9644 | 0.9847 | 0.5157 | 0.9636 | 0.3855 |
| 0.4512 | 0.47 | 70 | 0.4624 | 0.6734 | 0.7830 | 0.9598 | 0.9765 | 0.5895 | 0.9587 | 0.3881 |
| 0.426 | 0.5 | 75 | 0.4589 | 0.6688 | 0.7912 | 0.9572 | 0.9730 | 0.6094 | 0.9561 | 0.3815 |
| 0.4147 | 0.53 | 80 | 0.4529 | 0.6769 | 0.7846 | 0.9606 | 0.9773 | 0.5918 | 0.9596 | 0.3942 |
| 0.4144 | 0.57 | 85 | 0.4160 | 0.6767 | 0.7616 | 0.9635 | 0.9827 | 0.5405 | 0.9627 | 0.3908 |
| 0.4192 | 0.6 | 90 | 0.3747 | 0.6612 | 0.7271 | 0.9639 | 0.9863 | 0.4680 | 0.9631 | 0.3593 |
| 0.4294 | 0.63 | 95 | 0.3649 | 0.6495 | 0.7064 | 0.9637 | 0.9880 | 0.4247 | 0.9630 | 0.3359 |
| 0.3609 | 0.67 | 100 | 0.3730 | 0.6480 | 0.7003 | 0.9642 | 0.9893 | 0.4113 | 0.9636 | 0.3324 |
| 0.3782 | 0.7 | 105 | 0.3699 | 0.6584 | 0.7229 | 0.9637 | 0.9865 | 0.4592 | 0.9630 | 0.3538 |
| 0.3594 | 0.73 | 110 | 0.3505 | 0.6638 | 0.7161 | 0.9662 | 0.9899 | 0.4423 | 0.9656 | 0.3619 |
| 0.3966 | 0.77 | 115 | 0.3474 | 0.6720 | 0.7263 | 0.9670 | 0.9898 | 0.4627 | 0.9663 | 0.3776 |
| 0.3365 | 0.8 | 120 | 0.3598 | 0.6710 | 0.7185 | 0.9678 | 0.9915 | 0.4456 | 0.9672 | 0.3748 |
| 0.3497 | 0.83 | 125 | 0.3530 | 0.6752 | 0.7161 | 0.9692 | 0.9932 | 0.4389 | 0.9686 | 0.3817 |
| 0.3303 | 0.87 | 130 | 0.3424 | 0.6792 | 0.7247 | 0.9690 | 0.9922 | 0.4572 | 0.9684 | 0.3899 |
| 0.3702 | 0.9 | 135 | 0.3379 | 0.6823 | 0.7341 | 0.9686 | 0.9908 | 0.4774 | 0.9679 | 0.3967 |
| 0.3199 | 0.93 | 140 | 0.3317 | 0.6858 | 0.7468 | 0.9678 | 0.9888 | 0.5048 | 0.9671 | 0.4044 |
| 0.304 | 0.97 | 145 | 0.3189 | 0.6854 | 0.7408 | 0.9685 | 0.9900 | 0.4916 | 0.9678 | 0.4030 |
| 0.3392 | 1.0 | 150 | 0.3347 | 0.6839 | 0.7408 | 0.9681 | 0.9897 | 0.4918 | 0.9674 | 0.4003 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
voyzan/unit1-lunar_lander_v2-A01 | voyzan | 2023-07-08T21:03:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T21:02:53Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.36 +/- 17.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
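A minimal loading sketch; the `.zip` filename is an assumption, so check the repo's file list:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="voyzan/unit1-lunar_lander_v2-A01", filename="ppo-LunarLander-v2.zip")  # filename is an assumption
model = PPO.load(checkpoint)

# Query the policy on a sampled observation just to sanity-check the load
obs = model.observation_space.sample()
action, _states = model.predict(obs, deterministic=True)
```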
|
skrl/IsaacGymEnvs-Anymal-PPO | skrl | 2023-07-08T20:48:53Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T20:41:14Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 61.68 +/- 2.18
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-Anymal
type: IsaacGymEnvs-Anymal
---
<!-- ---
torch: 61.68 +/- 2.18
jax: 61.31 +/- 1.39
numpy: 59.62 +/- 1.85
--- -->
# IsaacGymEnvs-Anymal-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** Anymal
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Anymal-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Anymal-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 24 # memory_size
cfg["learning_epochs"] = 5
cfg["mini_batches"] = 3 # 24 * 4096 / 32768
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 3e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
RajkNakka/ppo-LunarLander-v2-unit-8 | RajkNakka | 2023-07-08T20:48:52Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:55:27Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 7.16 +/- 73.94
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
Huggingfly/ppo-SnowballTarget | Huggingfly | 2023-07-08T20:43:43Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-08T19:47:44Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Huggingfly/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
earentilt/LunarLander-v2 | earentilt | 2023-07-08T20:36:04Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T19:51:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.97 +/- 14.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
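A minimal load-and-evaluate sketch, assuming stable-baselines3 >= 2.0 (which uses Gymnasium), `gymnasium[box2d]` installed, and that the checkpoint filename matches the usual course naming:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="earentilt/LunarLander-v2", filename="ppo-LunarLander-v2.zip")  # filename is an assumption
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```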
|
skrl/IsaacGymEnvs-Ingenuity-PPO | skrl | 2023-07-08T20:24:38Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T20:44:57Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 7162.47 +/- 555.5
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-Ingenuity
type: IsaacGymEnvs-Ingenuity
---
<!-- ---
torch: 7018.19 +/- 508.68
jax: 7041.64 +/- 297.51
numpy: 7162.47 +/- 555.5
--- -->
# IsaacGymEnvs-Ingenuity-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** Ingenuity
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Ingenuity-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Ingenuity-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 4 # 16 * 4096 / 16384
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 1e-3
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.016}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.01
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
Word2vec/wikipedia2vec_enwiki_20180420_500d | Word2vec | 2023-07-08T20:18:13Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T19:36:28Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_500d", filename="enwiki_20180420_500d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
tyavika/LR1E5-BS8-Distil-CNN512LSTM256NoBi | tyavika | 2023-07-08T20:04:23Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-08T16:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E5-BS8-Distil-CNN512LSTM256NoBi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E5-BS8-Distil-CNN512LSTM256NoBi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7781 | 1.0 | 6580 | 1.6331 |
| 1.235 | 2.0 | 13160 | 1.2036 |
| 0.951 | 3.0 | 19740 | 1.1857 |
| 0.7847 | 4.0 | 26320 | 1.2156 |
| 0.6643 | 5.0 | 32900 | 1.3047 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fobt/speecht5_finetuned_voxpopuli_nl | fobt | 2023-07-08T19:59:00Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-07-08T17:41:08Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5237 | 4.3 | 1000 | 0.4782 |
| 0.4946 | 8.61 | 2000 | 0.4639 |
| 0.493 | 12.91 | 3000 | 0.4608 |
| 0.4903 | 17.21 | 4000 | 0.4585 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Word2vec/wikipedia2vec_enwiki_20180420_win10_300d | Word2vec | 2023-07-08T19:52:01Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T14:27:19Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_win10_300d", filename="enwiki_20180420_win10_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
snousias/bert-base-greek-uncased-v1-finetuned-polylex | snousias | 2023-07-08T19:50:38Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T19:48:32Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-greek-uncased-v1-finetuned-polylex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-greek-uncased-v1-finetuned-polylex
This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1637 | 1.0 | 12 | 2.6649 |
| 3.0581 | 2.0 | 24 | 2.5475 |
| 2.648 | 3.0 | 36 | 2.1624 |
| 2.5983 | 4.0 | 48 | 2.3285 |
| 2.7524 | 5.0 | 60 | 2.5745 |
| 2.4923 | 6.0 | 72 | 2.8096 |
| 2.5336 | 7.0 | 84 | 2.9470 |
| 2.3271 | 8.0 | 96 | 2.5497 |
| 2.4018 | 9.0 | 108 | 2.3413 |
| 2.544 | 10.0 | 120 | 2.4170 |
| 1.9144 | 11.0 | 132 | 2.5254 |
| 2.0996 | 12.0 | 144 | 2.4147 |
| 1.8733 | 13.0 | 156 | 2.5462 |
| 1.8261 | 14.0 | 168 | 2.2045 |
| 2.0033 | 15.0 | 180 | 1.9549 |
| 1.9967 | 16.0 | 192 | 2.1614 |
| 1.8515 | 17.0 | 204 | 2.8167 |
| 1.8583 | 18.0 | 216 | 2.8441 |
| 1.7512 | 19.0 | 228 | 2.4536 |
| 1.5746 | 20.0 | 240 | 2.6204 |
| 1.5267 | 21.0 | 252 | 2.9290 |
| 1.7248 | 22.0 | 264 | 2.0433 |
| 1.5692 | 23.0 | 276 | 2.4710 |
| 1.6093 | 24.0 | 288 | 2.4340 |
| 1.619 | 25.0 | 300 | 2.2689 |
| 1.4406 | 26.0 | 312 | 3.6729 |
| 1.5452 | 27.0 | 324 | 3.2225 |
| 1.4575 | 28.0 | 336 | 1.8853 |
| 1.5534 | 29.0 | 348 | 2.2135 |
| 1.4872 | 30.0 | 360 | 2.7540 |
| 1.3923 | 31.0 | 372 | 2.2408 |
| 1.3682 | 32.0 | 384 | 2.5181 |
| 1.2623 | 33.0 | 396 | 2.1360 |
| 1.1888 | 34.0 | 408 | 2.3912 |
| 1.3427 | 35.0 | 420 | 2.4600 |
| 1.1969 | 36.0 | 432 | 2.6388 |
| 1.3367 | 37.0 | 444 | 2.5489 |
| 1.226 | 38.0 | 456 | 1.5805 |
| 1.1808 | 39.0 | 468 | 2.7466 |
| 1.1694 | 40.0 | 480 | 2.4887 |
| 1.2736 | 41.0 | 492 | 2.5735 |
| 1.2292 | 42.0 | 504 | 2.2357 |
| 1.2556 | 43.0 | 516 | 2.9244 |
| 1.0155 | 44.0 | 528 | 1.8348 |
| 1.2425 | 45.0 | 540 | 2.4494 |
| 1.2665 | 46.0 | 552 | 2.4866 |
| 1.3439 | 47.0 | 564 | 2.3430 |
| 1.4468 | 48.0 | 576 | 1.7801 |
| 1.1772 | 49.0 | 588 | 2.5785 |
| 1.0618 | 50.0 | 600 | 2.9959 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
camus-ng/dreambooth_lora_cory_v15_ten | camus-ng | 2023-07-08T19:43:42Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-08T16:25:04Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of <ntvc> man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - camus-ng/dreambooth_lora_cory_v15_ten
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a photo of <ntvc> man" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
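A minimal inference sketch for these DreamBooth LoRA weights; `load_lora_weights` also applies the text-encoder LoRA mentioned above, and everything in the prompt beyond the instance token is a placeholder:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("camus-ng/dreambooth_lora_cory_v15_ten")
image = pipe("a photo of <ntvc> man, studio lighting", num_inference_steps=30).images[0]  # instance prompt plus placeholder details
image.save("ntvc_sample.png")
```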
|
nolanaatama/drkrvcsnpdgg | nolanaatama | 2023-07-08T19:41:14Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T19:34:06Z | ---
license: creativeml-openrail-m
---
|
jncraton/codet5p-220m-py-ct2-int8 | jncraton | 2023-07-08T19:12:46Z | 669 | 1 | transformers | [
"transformers",
"arxiv:2305.07922",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | 2023-06-30T18:48:16Z | ---
license: bsd-3-clause
---
# CodeT5+ 220M (further tuned on Python)
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (i.e. InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as original [CodeT5](https://github.com/salesforce/CodeT5).
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
checkpoint = "Salesforce/codet5p-220m-py"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print('Hello World!')
```
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby.`
## Training procedure
This checkpoint is first trained on the multilingual unimodal code data at the first-stage pretraining, which includes a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation. Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
In particular, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
Specifically for this checkpoint, it achieves 12.0% pass@1 on HumanEval in the zero-shot setting, which outperforms much larger LLMs such as Incoder 1.3B’s 8.9%, GPT-Neo 2.7B's 6.4%, and GPT-J 6B's 11.6%.
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` |
Word2vec/wikipedia2vec_enwiki_20180420_100d | Word2vec | 2023-07-08T19:12:30Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:00:06Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_100d", filename="enwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
Word2vec/wikipedia2vec_enwiki_20180420_nolg_300d | Word2vec | 2023-07-08T19:06:31Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T13:48:27Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_nolg_300d", filename="enwiki_20180420_nolg_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
c72599/a2c-PandaReachDense-v2 | c72599 | 2023-07-08T18:55:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:53:05Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.94 +/- 0.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
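A minimal loading sketch; the filename is an assumption, and actually creating the Panda environment additionally requires `panda-gym`:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(repo_id="c72599/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")  # filename is an assumption
model = A2C.load(checkpoint)
```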
|
cagarraz/Reinforce-12356 | cagarraz | 2023-07-08T18:52:35Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:48:43Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-12356
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 6.30 +/- 8.82
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
cagarraz/Reinforce-1234 | cagarraz | 2023-07-08T18:41:26Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-28T16:38:24Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1234
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 34.70 +/- 15.01
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NERO500/q-FrozenLake-v1-4x4-noSlippery | NERO500 | 2023-07-08T18:39:12Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:39:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="NERO500/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Word2vec/wikipedia2vec_plwiki_20180420_300d | Word2vec | 2023-07-08T18:36:17Z | 0 | 0 | null | [
"word2vec",
"pl",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T08:52:46Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- pl
---
## Information
Pretrained Word2vec in Polish. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_plwiki_20180420_300d", filename="plwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Word2vec/wikipedia2vec_arwiki_20180420_300d | Word2vec | 2023-07-08T18:34:15Z | 0 | 0 | null | [
"word2vec",
"ar",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T09:33:09Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ar
---
## Information
Pretrained Word2vec in Arabic. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_arwiki_20180420_300d", filename="arwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Word2vec/wikipedia2vec_zhwiki_20180420_300d | Word2vec | 2023-07-08T18:32:34Z | 0 | 1 | null | [
"word2vec",
"zh",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T09:42:06Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- zh
---
## Information
Pretrained Word2vec in Chinese. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_zhwiki_20180420_300d", filename="zhwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
Word2vec/wikipedia2vec_arwiki_20180420_100d | Word2vec | 2023-07-08T18:29:53Z | 0 | 0 | null | [
"word2vec",
"ar",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T16:51:26Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ar
---
## Information
Pretrained Word2vec in Arabic. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_arwiki_20180420_100d", filename="arwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Word2vec/wikipedia2vec_nlwiki_20180420_100d | Word2vec | 2023-07-08T18:28:26Z | 0 | 0 | null | [
"word2vec",
"nl",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:01:21Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- nl
---
## Information
Pretrained Word2vec in Dutch. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_nlwiki_20180420_100d", filename="nlwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
Word2vec/wikipedia2vec_plwiki_20180420_100d | Word2vec | 2023-07-08T18:21:29Z | 0 | 0 | null | [
"word2vec",
"pl",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:01:10Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- pl
---
## Information
Pretrained Word2vec in Polish. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_plwiki_20180420_100d", filename="plwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
jason1i/whisper-small-zh-HK | jason1i | 2023-07-08T18:15:56Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hk",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-08T17:19:53Z | ---
language:
- hk
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small hk
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: zh-HK
split: test
args: zh-HK
metrics:
- name: Wer
type: wer
value: 64.88393977415308
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small hk
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2883
- Wer Ortho: 66.1207
- Wer: 64.8839
## Model description
More information needed
## Intended uses & limitations
More information needed
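Pending a fuller description, a minimal inference sketch (assuming the standard `transformers` ASR pipeline; the audio file name below is a placeholder):
```python
from transformers import pipeline

# load the fine-tuned checkpoint as a speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="jason1i/whisper-small-zh-HK")

# transcribe a local audio file (placeholder path)
print(asr("sample_cantonese.wav")["text"])
```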
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
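For reference, these settings roughly correspond to the following `Seq2SeqTrainingArguments` sketch (hedged: `output_dir` is a placeholder, the Adam betas/epsilon above are the library defaults, and the trainer/data wiring is not shown in this card):
```python
from transformers import Seq2SeqTrainingArguments

# hedged reconstruction of the hyperparameters listed above
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-zh-HK",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,
    fp16=True,  # "Native AMP" mixed precision
)
```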
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.3393 | 0.57 | 500 | 0.2883 | 66.1207 | 64.8839 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Word2vec/wikipedia2vec_frwiki_20180420_300d | Word2vec | 2023-07-08T18:13:55Z | 0 | 0 | null | [
"word2vec",
"fr",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T09:15:49Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- fr
---
## Information
Pretrained Word2vec in French. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_frwiki_20180420_300d", filename="frwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
Word2vec/wikipedia2vec_dewiki_20180420_300d | Word2vec | 2023-07-08T18:13:11Z | 0 | 0 | null | [
"word2vec",
"de",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T09:07:09Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- de
---
## Information
Pretrained Word2vec in German. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_dewiki_20180420_300d", filename="dewiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
HaziqRazali/ppo-Huggy | HaziqRazali | 2023-07-08T18:11:28Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-08T18:11:18Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: HaziqRazali/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
abhi-8/DialoGPT-medium-Rick | abhi-8 | 2023-07-08T18:07:47Z | 135 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T09:27:00Z | ---
pipeline_tag: conversational
--- |
Word2vec/wikipedia2vec_ruwiki_20180420_300d | Word2vec | 2023-07-08T18:06:08Z | 0 | 0 | null | [
"word2vec",
"ru",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T08:51:48Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ru
---
## Information
Pretrained Word2vec in Russian. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_ruwiki_20180420_300d", filename="ruwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
skrl/IsaacGymEnvs-Ant-PPO | skrl | 2023-07-08T18:04:35Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T21:15:29Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 5094.76 +/- 310.06
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-Ant
type: IsaacGymEnvs-Ant
---
<!-- ---
torch: 4996.72 +/- 273.16
jax: 5094.76 +/- 310.06
numpy: 4542.73 +/- 467.69
--- -->
# IsaacGymEnvs-Ant-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** Ant
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Ant-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Ant-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# imports assumed from the skrl PPO example scripts (PyTorch variant); `env` and `device` are created there
from skrl.agents.torch.ppo import PPO_DEFAULT_CONFIG
from skrl.resources.preprocessors.torch import RunningStandardScaler
from skrl.resources.schedulers.torch import KLAdaptiveRL

# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16 # memory_size
cfg["learning_epochs"] = 4
cfg["mini_batches"] = 2 # 16 * 4096 / 32768
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 3e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.01
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
TomyAI/NamedDiapers | TomyAI | 2023-07-08T18:00:45Z | 0 | 4 | null | [
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-29T08:11:24Z | ---
language:
- ja
thumbnail: NamedDiapers_1.png
license: creativeml-openrail-m
---
This is a LoRA for adult-size (that part is important) diapers.
It turned out rather peaky, so I am in the middle of remaking it, but I am releasing it as-is for now.
Tags:
- diaper
- babycuties
- bunnyhopps
- littlekings
- princesspink
- oyasumiman
 |
Word2vec/wikipedia2vec_eswiki_20180420_300d | Word2vec | 2023-07-08T17:58:18Z | 0 | 1 | null | [
"word2vec",
"es",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T08:53:59Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- es
---
## Information
Pretrained Word2vec in Spanish. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_eswiki_20180420_300d", filename="eswiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
c72599/a2c-AntBulletEnv-v0 | c72599 | 2023-07-08T17:57:41Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T17:56:25Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1301.48 +/- 271.19
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the usual `huggingface_sb3` convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# download the checkpoint from the Hub and load it into an A2C agent
checkpoint = load_from_hub(repo_id="c72599/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Word2vec/wikipedia2vec_ruwiki_20180420_100d | Word2vec | 2023-07-08T17:51:41Z | 0 | 0 | null | [
"word2vec",
"ru",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:00:45Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ru
---
## Information
Pretrained Word2vec in Russian. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_ruwiki_20180420_100d", filename="ruwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Word2vec/wikipedia2vec_eswiki_20180420_100d | Word2vec | 2023-07-08T17:48:37Z | 0 | 1 | null | [
"word2vec",
"es",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:02:04Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- es
---
## Information
Pretrained Word2vec in Spanish. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_eswiki_20180420_100d", filename="eswiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Teunis89/q-FrozenLake-v1-4x4-noSlippery | Teunis89 | 2023-07-08T17:45:45Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T17:45:43Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym, pickle  # gymnasium also works for FrozenLake-v1
from huggingface_hub import hf_hub_download
# minimal stand-in for the course's `load_from_hub` helper (assumes the .pkl holds a dict, as in the Deep RL course)
model = pickle.load(open(hf_hub_download("Teunis89/q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl"), "rb"))
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dashan1992/dsl1 | dashan1992 | 2023-07-08T17:42:12Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-08T17:41:42Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
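For reference, the non-default values above map onto a `transformers` `BitsAndBytesConfig` roughly as follows (a hedged sketch; the base model this adapter targets is not named in this card):
```python
from transformers import BitsAndBytesConfig

# 8-bit loading with the thresholds listed above; the 4-bit fields keep their defaults
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```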
### Framework versions
- PEFT 0.4.0.dev0
|
spitfire4794/dialogpt-small-rick | spitfire4794 | 2023-07-08T17:42:12Z | 139 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T11:42:03Z | ---
language:
- en
library_name: transformers
pipeline_tag: conversational
tags:
- gpt2
- pytorch
--- |
balacoon/frontend | balacoon | 2023-07-08T17:34:08Z | 0 | 0 | null | [
"Pronunciation Generation",
"Text Normalization",
"en",
"region:us"
] | null | 2022-10-02T17:27:58Z | ---
language:
- en
tags:
- Pronunciation Generation
- Text Normalization
---
# TTS Frontend
Here you can find addons compatible with the
[balacoon_frontend](https://pypi.fury.io/balacoon/) Python package.
Learn how to use the TTS frontend from Balacoon at https://balacoon.com/use/.
List of available addons:
- <mark>en_us_frontend.addon</mark> - FST-based pronunciation generation
combined with FST-based text normalization. The former is based on
[CMUdict](https://github.com/cmusphinx/cmudict) and
[Phonetisaurus](https://github.com/AdolfVonKleist/Phonetisaurus).
The latter uses hand-crafted rules for [Sparrowhawk](https://github.com/google/sparrowhawk/).
For now, the addon is missing contextual pronunciation generation.
|