| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-02 12:29:30) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 548 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-02 12:29:18) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| neopolita/q-FrozenLake-v1-4x4-noSlippery | neopolita | 2023-12-06T15:07:47Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-12-06T15:07:44Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper provided by the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="neopolita/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
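As a rough sketch of how the loaded Q-table could drive a greedy rollout — this assumes the pickle stores the table under a "qtable" key and the environment id under "env_id", as in the Hugging Face Deep RL course utilities, and uses the classic `gym` step API:
```python
import gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state = env.reset()
done = False
total_reward = 0.0
while not done:
    # Always take the highest-valued action for the current state
    action = np.argmax(model["qtable"][state])
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode reward: {total_reward}")
```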
|
| jhsmith/finetuning_mixed | jhsmith | 2023-12-06T15:06:20Z | 5 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2023-12-02T23:15:59Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
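Because the architecture below ends with a `Normalize()` module, the embeddings can be compared directly with cosine similarity, for example via the library's built-in utility:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(["A man is eating food.", "A man is eating a piece of bread."])

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```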
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3057 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 200,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 0.0001
},
"scheduler": "warmuplinear",
"steps_per_epoch": null,
"warmup_steps": 305,
"weight_decay": 0.01
}
```
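Put together, a training run with these parameters would look roughly like the sketch below. The base checkpoint and the training pairs are placeholders: the card does not name them, and `all-mpnet-base-v2` is only an assumption consistent with the MPNet architecture shown in the next section.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base model; the card does not state which MPNet checkpoint was used
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder training pairs with similarity labels in [0, 1]
train_examples = [InputExample(texts=["first sentence", "second sentence"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=10)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=305,
    scheduler="warmuplinear",
    optimizer_params={"lr": 1e-4},
    weight_decay=0.01,
)
```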
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
| TheBloke/Tess-34B-v1.4-GGUF | TheBloke | 2023-12-06T14:55:56Z | 47 | 4 | transformers | ["transformers", "gguf", "yi", "base_model:migtissera/Tess-34B-v1.4", "base_model:quantized:migtissera/Tess-34B-v1.4", "license:other", "region:us"] | null | 2023-12-06T14:35:19Z |
---
base_model: migtissera/Tess-34B-v1.4
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
license_name: yi-34b
model_creator: Migel Tissera
model_name: Tess 34B v1.4
model_type: yi
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tess 34B v1.4 - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Tess 34B v1.4](https://huggingface.co/migtissera/Tess-34B-v1.4)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Migel Tissera's Tess 34B v1.4](https://huggingface.co/migtissera/Tess-34B-v1.4).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tess-34B-v1.4-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tess-34B-v1.4-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Tess-34B-v1.4)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tess-34b-v1.4.Q2_K.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes |
| [tess-34b-v1.4.Q3_K_S.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss |
| [tess-34b-v1.4.Q3_K_M.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss |
| [tess-34b-v1.4.Q3_K_L.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss |
| [tess-34b-v1.4.Q4_0.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tess-34b-v1.4.Q4_K_S.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss |
| [tess-34b-v1.4.Q4_K_M.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended |
| [tess-34b-v1.4.Q5_0.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tess-34b-v1.4.Q5_K_S.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended |
| [tess-34b-v1.4.Q5_K_M.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended |
| [tess-34b-v1.4.Q6_K.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss |
| [tess-34b-v1.4.Q8_0.gguf](https://huggingface.co/TheBloke/Tess-34B-v1.4-GGUF/blob/main/tess-34b-v1.4.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Tess-34B-v1.4-GGUF and below it, a specific filename to download, such as: tess-34b-v1.4.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Tess-34B-v1.4-GGUF tess-34b-v1.4.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Tess-34B-v1.4-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tess-34B-v1.4-GGUF tess-34b-v1.4.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m tess-34b-v1.4.Q4_K_M.gguf --color -c 200000 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 200000` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./tess-34b-v1.4.Q4_K_M.gguf", # Download the model file first
n_ctx=200000, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./tess-34b-v1.4.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
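As a minimal sketch of the llama-cpp-python route (parameters mirror the example above; the import path shown is the one used by LangChain releases current at the time of writing):
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./tess-34b-v1.4.Q4_K_M.gguf",
    n_ctx=4096,       # reduced from the model's 200K maximum to fit typical RAM
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm("SYSTEM: You are a helpful assistant.\nUSER: Tell me about llamas.\nASSISTANT:"))
```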
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Migel Tissera's Tess 34B v1.4
# Tess

Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-M-v1.4 was trained on the Yi-34B-200K base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
<!-- original-model-card end -->
|
| felipe-cmsa/google-play-sentiment-analysis | felipe-cmsa | 2023-12-06T14:52:01Z | 17 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-12-06T13:22:26Z |
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: google-play-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-play-sentiment-analysis
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1514
- Accuracy: 0.4623
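For quick inference, a generic sketch with the `transformers` pipeline (the label mapping is not documented on this card, so the returned label names depend on the training configuration):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="felipe-cmsa/google-play-sentiment-analysis")
# A Portuguese Google Play review as a placeholder input
print(classifier("O aplicativo é excelente, recomendo!"))
```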
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 1.9720 | 0.4263 |
| 0.518 | 2.0 | 500 | 2.1792 | 0.4647 |
| 0.518 | 3.0 | 750 | 2.6638 | 0.4383 |
| 0.2242 | 4.0 | 1000 | 2.9686 | 0.4620 |
| 0.2242 | 5.0 | 1250 | 3.1514 | 0.4623 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
| DianaJin/logi | DianaJin | 2023-12-06T14:51:57Z | 8 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ko", "dataset:DianaJin/logi", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-12-06T07:59:50Z |
---
language:
- ko
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- DianaJin/logi
model-index:
- name: jinkrsmodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jinkrsmodel
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the DianaJin/logi dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2265
- Cer: 100.0
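A generic transcription sketch with the `transformers` pipeline follows; note that the reported CER of 100.0 suggests the checkpoint may not transcribe reliably, and `audio.wav` is a placeholder for a Korean speech file:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DianaJin/logi")
print(asr("audio.wav"))
```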
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-----:|
| 0.0004 | 250.0 | 500 | 2.3248 | 100.0 |
| 0.0002 | 500.0 | 1000 | 2.3827 | 100.0 |
| 0.0002 | 750.0 | 1500 | 2.9600 | 100.0 |
| 0.0002 | 1000.0 | 2000 | 3.2265 | 100.0 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
| smc/videomae-base-finetuned-ucf101-subset | smc | 2023-12-06T14:47:27Z | 4 | 0 | transformers | ["transformers", "pytorch", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | video-classification | 2023-11-29T17:25:01Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5324
- Accuracy: 0.7871
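For inference, a generic sketch with the `transformers` video-classification pipeline (this assumes a video decoding backend such as `decord` is installed, and `video.mp4` is a placeholder clip):
```python
from transformers import pipeline

classifier = pipeline("video-classification", model="smc/videomae-base-finetuned-ucf101-subset")
print(classifier("video.mp4"))
```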
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3271 | 0.13 | 19 | 2.1350 | 0.2429 |
| 1.9566 | 1.13 | 38 | 1.4260 | 0.5857 |
| 1.0941 | 2.13 | 57 | 0.8857 | 0.7143 |
| 0.5363 | 3.13 | 76 | 0.6664 | 0.7143 |
| 0.3392 | 4.13 | 95 | 0.4023 | 0.9143 |
| 0.1765 | 5.13 | 114 | 0.3487 | 0.9 |
| 0.1318 | 6.13 | 133 | 0.3181 | 0.9 |
| 0.1092 | 7.1 | 148 | 0.2886 | 0.9 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1
|
| mireiaplalis/roberta-basefinetuned-ner-cadec | mireiaplalis | 2023-12-06T14:37:49Z | 7 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-12-06T14:27:34Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-basefinetuned-ner-cadec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-basefinetuned-ner-cadec
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3874
- Precision: 0.4370
- Recall: 0.4719
- F1: 0.4538
- Accuracy: 0.8849
- Adr Precision: 0.3917
- Adr Recall: 0.4477
- Adr F1: 0.4178
- Disease Precision: 0.0
- Disease Recall: 0.0
- Disease F1: 0.0
- Drug Precision: 0.7184
- Drug Recall: 0.7576
- Drug F1: 0.7375
- Finding Precision: 0.1389
- Finding Recall: 0.1111
- Finding F1: 0.1235
- Symptom Precision: 0.2353
- Symptom Recall: 0.1481
- Symptom F1: 0.1818
- B-adr Precision: 0.6259
- B-adr Recall: 0.6488
- B-adr F1: 0.6371
- B-disease Precision: 0.0
- B-disease Recall: 0.0
- B-disease F1: 0.0
- B-drug Precision: 0.8589
- B-drug Recall: 0.8485
- B-drug F1: 0.8537
- B-finding Precision: 0.4
- B-finding Recall: 0.1778
- B-finding F1: 0.2462
- B-symptom Precision: 0.2667
- B-symptom Recall: 0.16
- B-symptom F1: 0.2
- I-adr Precision: 0.3877
- I-adr Recall: 0.4305
- I-adr F1: 0.4079
- I-disease Precision: 0.0
- I-disease Recall: 0.0
- I-disease F1: 0.0
- I-drug Precision: 0.7456
- I-drug Recall: 0.7636
- I-drug F1: 0.7545
- I-finding Precision: 0.1429
- I-finding Recall: 0.125
- I-finding F1: 0.1333
- I-symptom Precision: 0.5
- I-symptom Recall: 0.1
- I-symptom F1: 0.1667
- Macro Avg F1: 0.3399
- Weighted Avg F1: 0.5527
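For inference, a generic sketch with the `transformers` token-classification pipeline; `aggregation_strategy="simple"` merges word pieces into whole entity spans (ADR, Disease, Drug, Finding, Symptom), and the input sentence is a placeholder:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mireiaplalis/roberta-basefinetuned-ner-cadec",
    aggregation_strategy="simple",
)
print(ner("I took ibuprofen and got a terrible headache."))
```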
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Adr Precision | Adr Recall | Adr F1 | Disease Precision | Disease Recall | Disease F1 | Drug Precision | Drug Recall | Drug F1 | Finding Precision | Finding Recall | Finding F1 | Symptom Precision | Symptom Recall | Symptom F1 | B-adr Precision | B-adr Recall | B-adr F1 | B-disease Precision | B-disease Recall | B-disease F1 | B-drug Precision | B-drug Recall | B-drug F1 | B-finding Precision | B-finding Recall | B-finding F1 | B-symptom Precision | B-symptom Recall | B-symptom F1 | I-adr Precision | I-adr Recall | I-adr F1 | I-disease Precision | I-disease Recall | I-disease F1 | I-drug Precision | I-drug Recall | I-drug F1 | I-finding Precision | I-finding Recall | I-finding F1 | I-symptom Precision | I-symptom Recall | I-symptom F1 | Macro Avg F1 | Weighted Avg F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|:------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------:|:-------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:------------:|:---------------:|
| No log | 1.0 | 127 | 0.5344 | 0.3114 | 0.2247 | 0.2611 | 0.8487 | 0.1715 | 0.1505 | 0.1603 | 0.0 | 0.0 | 0.0 | 0.98 | 0.5939 | 0.7396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5933 | 0.2380 | 0.3397 | 0.0 | 0.0 | 0.0 | 1.0 | 0.5939 | 0.7452 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1456 | 0.1347 | 0.1399 | 0.0 | 0.0 | 0.0 | 0.98 | 0.5939 | 0.7396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1965 | 0.3329 |
| No log | 2.0 | 254 | 0.4494 | 0.3603 | 0.2946 | 0.3242 | 0.8676 | 0.2676 | 0.2440 | 0.2553 | 0.0 | 0.0 | 0.0 | 0.6519 | 0.6242 | 0.6378 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5703 | 0.4280 | 0.4890 | 0.0 | 0.0 | 0.0 | 1.0 | 0.6182 | 0.7640 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2759 | 0.2296 | 0.2506 | 0.0 | 0.0 | 0.0 | 0.7342 | 0.7030 | 0.7183 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2222 | 0.4204 |
| No log | 3.0 | 381 | 0.4357 | 0.3508 | 0.3758 | 0.3629 | 0.8628 | 0.2656 | 0.3431 | 0.2994 | 0.0 | 0.0 | 0.0 | 0.7451 | 0.6909 | 0.7170 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5393 | 0.5662 | 0.5524 | 0.0 | 0.0 | 0.0 | 0.9375 | 0.7273 | 0.8191 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2402 | 0.3113 | 0.2712 | 0.0 | 0.0 | 0.0 | 0.7550 | 0.6909 | 0.7215 | 1.0 | 0.0312 | 0.0606 | 0.0 | 0.0 | 0.0 | 0.2425 | 0.4573 |
| 0.5429 | 4.0 | 508 | 0.4086 | 0.4501 | 0.4170 | 0.4329 | 0.8819 | 0.3612 | 0.3890 | 0.3746 | 0.0 | 0.0 | 0.0 | 0.7922 | 0.7394 | 0.7649 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5988 | 0.5816 | 0.5901 | 0.0 | 0.0 | 0.0 | 0.9209 | 0.7758 | 0.8421 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3793 | 0.3642 | 0.3716 | 0.0 | 0.0 | 0.0 | 0.82 | 0.7455 | 0.7810 | 1.0 | 0.0312 | 0.0606 | 0.0 | 0.0 | 0.0 | 0.2645 | 0.5113 |
| 0.5429 | 5.0 | 635 | 0.3806 | 0.4225 | 0.4457 | 0.4338 | 0.8797 | 0.3398 | 0.4165 | 0.3743 | 0.0 | 0.0 | 0.0 | 0.7805 | 0.7758 | 0.7781 | 0.2 | 0.0222 | 0.0400 | 0.5 | 0.0370 | 0.0690 | 0.5844 | 0.6180 | 0.6007 | 0.0 | 0.0 | 0.0 | 0.8535 | 0.8121 | 0.8323 | 0.5 | 0.0222 | 0.0426 | 0.5 | 0.04 | 0.0741 | 0.3346 | 0.4018 | 0.3651 | 0.1667 | 0.0769 | 0.1053 | 0.8153 | 0.7758 | 0.7950 | 0.2 | 0.0312 | 0.0541 | 0.0 | 0.0 | 0.0 | 0.2869 | 0.5170 |
| 0.5429 | 6.0 | 762 | 0.3902 | 0.3860 | 0.4419 | 0.4121 | 0.8738 | 0.3329 | 0.4239 | 0.3729 | 0.0 | 0.0 | 0.0 | 0.6349 | 0.7273 | 0.6780 | 0.0833 | 0.0222 | 0.0351 | 0.4 | 0.0741 | 0.125 | 0.5832 | 0.6526 | 0.6159 | 0.0 | 0.0 | 0.0 | 0.7886 | 0.8364 | 0.8118 | 0.3333 | 0.0444 | 0.0784 | 0.4 | 0.08 | 0.1333 | 0.3198 | 0.3996 | 0.3553 | 0.0588 | 0.0769 | 0.0667 | 0.6910 | 0.7455 | 0.7172 | 0.1 | 0.0312 | 0.0476 | 0.0 | 0.0 | 0.0 | 0.2826 | 0.5099 |
| 0.5429 | 7.0 | 889 | 0.3776 | 0.4149 | 0.4594 | 0.4360 | 0.8795 | 0.3595 | 0.4367 | 0.3944 | 0.0 | 0.0 | 0.0 | 0.6949 | 0.7455 | 0.7193 | 0.125 | 0.0667 | 0.0870 | 0.3636 | 0.1481 | 0.2105 | 0.6094 | 0.6468 | 0.6276 | 0.0 | 0.0 | 0.0 | 0.8405 | 0.8303 | 0.8354 | 0.4167 | 0.1111 | 0.1754 | 0.4 | 0.16 | 0.2286 | 0.3443 | 0.4150 | 0.3764 | 0.0 | 0.0 | 0.0 | 0.7326 | 0.7636 | 0.7478 | 0.1905 | 0.125 | 0.1509 | 0.0 | 0.0 | 0.0 | 0.3142 | 0.5330 |
| 0.3019 | 8.0 | 1016 | 0.3892 | 0.4108 | 0.4657 | 0.4365 | 0.8781 | 0.3488 | 0.4404 | 0.3893 | 0.0 | 0.0 | 0.0 | 0.75 | 0.7636 | 0.7568 | 0.16 | 0.0889 | 0.1143 | 0.2727 | 0.1111 | 0.1579 | 0.5928 | 0.6679 | 0.6282 | 0.0 | 0.0 | 0.0 | 0.8625 | 0.8364 | 0.8492 | 0.4375 | 0.1556 | 0.2295 | 0.3 | 0.12 | 0.1714 | 0.3357 | 0.4172 | 0.3720 | 0.0 | 0.0 | 0.0 | 0.7875 | 0.7636 | 0.7754 | 0.1667 | 0.0938 | 0.1200 | 0.0 | 0.0 | 0.0 | 0.3146 | 0.5366 |
| 0.3019 | 9.0 | 1143 | 0.3872 | 0.4463 | 0.4719 | 0.4587 | 0.8845 | 0.3939 | 0.4495 | 0.4199 | 0.0 | 0.0 | 0.0 | 0.7530 | 0.7576 | 0.7553 | 0.1333 | 0.0889 | 0.1067 | 0.2667 | 0.1481 | 0.1905 | 0.6309 | 0.6430 | 0.6369 | 0.0 | 0.0 | 0.0 | 0.8571 | 0.8364 | 0.8466 | 0.4375 | 0.1556 | 0.2295 | 0.3077 | 0.16 | 0.2105 | 0.3893 | 0.4349 | 0.4108 | 0.0 | 0.0 | 0.0 | 0.7764 | 0.7576 | 0.7669 | 0.16 | 0.125 | 0.1404 | 0.6667 | 0.1 | 0.1739 | 0.3416 | 0.5540 |
| 0.3019 | 10.0 | 1270 | 0.3874 | 0.4370 | 0.4719 | 0.4538 | 0.8849 | 0.3917 | 0.4477 | 0.4178 | 0.0 | 0.0 | 0.0 | 0.7184 | 0.7576 | 0.7375 | 0.1389 | 0.1111 | 0.1235 | 0.2353 | 0.1481 | 0.1818 | 0.6259 | 0.6488 | 0.6371 | 0.0 | 0.0 | 0.0 | 0.8589 | 0.8485 | 0.8537 | 0.4 | 0.1778 | 0.2462 | 0.2667 | 0.16 | 0.2 | 0.3877 | 0.4305 | 0.4079 | 0.0 | 0.0 | 0.0 | 0.7456 | 0.7636 | 0.7545 | 0.1429 | 0.125 | 0.1333 | 0.5 | 0.1 | 0.1667 | 0.3399 | 0.5527 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
| Sayntz/google-play-sentiment-analysis | Sayntz | 2023-12-06T14:35:07Z | 13 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-12-06T14:29:39Z |
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: google-play-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-play-sentiment-analysis
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3834
- Accuracy: 0.453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.3144 | 0.415 |
| No log | 2.0 | 250 | 1.3104 | 0.446 |
| No log | 3.0 | 375 | 1.3834 | 0.453 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
| TheBloke/Tess-7B-v1.4-GPTQ | TheBloke | 2023-12-06T14:31:46Z | 23 | 1 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "base_model:migtissera/Tess-7B-v1.4", "base_model:quantized:migtissera/Tess-7B-v1.4", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us"] | text-generation | 2023-12-06T14:02:38Z |
---
base_model: migtissera/Tess-7B-v1.4
inference: false
license: apache-2.0
model_creator: Migel Tissera
model_name: Tess 7B V1.4
model_type: mistral
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tess 7B V1.4 - GPTQ
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Tess 7B V1.4](https://huggingface.co/migtissera/Tess-7B-v1.4)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Migel Tissera's Tess 7B V1.4](https://huggingface.co/migtissera/Tess-7B-v1.4).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tess-7B-v1.4-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tess-7B-v1.4-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Tess-7B-v1.4)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Tess-7B-v1.4-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Tess-7B-v1.4-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Tess-7B-v1.4-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Tess-7B-v1.4-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Tess-7B-v1.4-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Tess-7B-v1.4-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Tess-7B-v1.4-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Tess-7B-v1.4-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Tess-7B-v1.4-GPTQ`:
```shell
mkdir Tess-7B-v1.4-GPTQ
huggingface-cli download TheBloke/Tess-7B-v1.4-GPTQ --local-dir Tess-7B-v1.4-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Tess-7B-v1.4-GPTQ
huggingface-cli download TheBloke/Tess-7B-v1.4-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Tess-7B-v1.4-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed, and lets you quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder, making it harder to know where your disk space is being used and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Tess-7B-v1.4-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tess-7B-v1.4-GPTQ --local-dir Tess-7B-v1.4-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Tess-7B-v1.4-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Tess-7B-v1.4-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Tess-7B-v1.4-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Tess-7B-v1.4-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Tess-7B-v1.4-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
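For example, a complete `docker run` invocation might look like this; the volume mount, shared-memory size, and port mapping are typical TGI usage rather than values taken from this card:
```shell
docker run --gpus all --shm-size 1g -p 3000:3000 -v $PWD/data:/data \
    ghcr.io/huggingface/text-generation-inference:1.1.0 \
    --model-id TheBloke/Tess-7B-v1.4-GPTQ --port 3000 --quantize gptq \
    --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```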
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Tess-7B-v1.4-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Migel Tissera's Tess 7B V1.4
# Tess

Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-XS-v1.4 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
|
marcelo-castro-cardoso/google-play-sentiment-analysis
|
marcelo-castro-cardoso
| 2023-12-06T14:30:46Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-06T14:29:41Z |
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: google-play-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-play-sentiment-analysis
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3497
- Accuracy: 0.445
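A minimal inference sketch (the example review and the meaning of the returned labels are assumptions; check the repo's config for the actual label names):
```python
from transformers import pipeline

# Minimal sketch; label semantics depend on this repo's config
classifier = pipeline(
    "text-classification",
    model="marcelo-castro-cardoso/google-play-sentiment-analysis",
)
print(classifier("O aplicativo é ótimo, recomendo!"))  # a Portuguese Google Play review
```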
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.3396 | 0.391 |
| No log | 2.0 | 250 | 1.2938 | 0.451 |
| No log | 3.0 | 375 | 1.3497 | 0.445 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
livingbox/minimalist-style-01-05
|
livingbox
| 2023-12-06T14:28:30Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-06T14:24:08Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### minimalist_style-01-05 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
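A minimal 🧨 diffusers loading sketch (using the concept name above as the trigger phrase is an assumption):
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch; "minimalist_style-01-05" as the trigger phrase is an assumption
pipe = StableDiffusionPipeline.from_pretrained(
    "livingbox/minimalist-style-01-05", torch_dtype=torch.float16
).to("cuda")
image = pipe("a living room, minimalist_style-01-05").images[0]
image.save("minimalist_room.png")
```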
Sample pictures of this concept:
|
mireiaplalis/albert-base-v2-finetuned-ner-cadec
|
mireiaplalis
| 2023-12-06T14:25:17Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"albert",
"token-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-05T16:08:34Z |
---
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: albert-base-v2-finetuned-ner-cadec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-ner-cadec
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3782
- Precision: 0.6044
- Recall: 0.6542
- F1: 0.6283
- Accuracy: 0.9197
- Adr Precision: 0.5756
- Adr Recall: 0.6495
- Adr F1: 0.6103
- Disease Precision: 0.1923
- Disease Recall: 0.2632
- Disease F1: 0.2222
- Drug Precision: 0.9259
- Drug Recall: 0.9091
- Drug F1: 0.9174
- Finding Precision: 0.1667
- Finding Recall: 0.2
- Finding F1: 0.1818
- Symptom Precision: 0.6
- Symptom Recall: 0.2222
- Symptom F1: 0.3243
- B-adr Precision: 0.7331
- B-adr Recall: 0.7908
- B-adr F1: 0.7608
- B-disease Precision: 0.2778
- B-disease Recall: 0.2632
- B-disease F1: 0.2703
- B-drug Precision: 0.9630
- B-drug Recall: 0.9455
- B-drug F1: 0.9541
- B-finding Precision: 0.2391
- B-finding Recall: 0.2444
- B-finding F1: 0.2418
- B-symptom Precision: 0.75
- B-symptom Recall: 0.24
- B-symptom F1: 0.3636
- I-adr Precision: 0.5746
- I-adr Recall: 0.6524
- I-adr F1: 0.6110
- I-disease Precision: 0.2222
- I-disease Recall: 0.3077
- I-disease F1: 0.2581
- I-drug Precision: 0.9259
- I-drug Recall: 0.9202
- I-drug F1: 0.9231
- I-finding Precision: 0.1842
- I-finding Recall: 0.2188
- I-finding F1: 0.2000
- I-symptom Precision: 0.25
- I-symptom Recall: 0.0476
- I-symptom F1: 0.08
- Macro Avg F1: 0.4663
- Weighted Avg F1: 0.6990
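A minimal inference sketch for the fine-tuned tagger (the example sentence is illustrative; the entity classes — ADR, Disease, Drug, Finding, Symptom — follow from the metrics above):
```python
from transformers import pipeline

# Minimal sketch; aggregation merges B-/I- pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="mireiaplalis/albert-base-v2-finetuned-ner-cadec",
    aggregation_strategy="simple",
)
print(ner("I took ibuprofen and got a mild headache."))
```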
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Adr Precision | Adr Recall | Adr F1 | Disease Precision | Disease Recall | Disease F1 | Drug Precision | Drug Recall | Drug F1 | Finding Precision | Finding Recall | Finding F1 | Symptom Precision | Symptom Recall | Symptom F1 | B-adr Precision | B-adr Recall | B-adr F1 | B-disease Precision | B-disease Recall | B-disease F1 | B-drug Precision | B-drug Recall | B-drug F1 | B-finding Precision | B-finding Recall | B-finding F1 | B-symptom Precision | B-symptom Recall | B-symptom F1 | I-adr Precision | I-adr Recall | I-adr F1 | I-disease Precision | I-disease Recall | I-disease F1 | I-drug Precision | I-drug Recall | I-drug F1 | I-finding Precision | I-finding Recall | I-finding F1 | I-symptom Precision | I-symptom Recall | I-symptom F1 | Macro Avg F1 | Weighted Avg F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|:------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------:|:-------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:------------:|:---------------:|
| No log | 1.0 | 127 | 0.2569 | 0.5268 | 0.6142 | 0.5671 | 0.9148 | 0.4666 | 0.6275 | 0.5352 | 0.0 | 0.0 | 0.0 | 0.8471 | 0.8727 | 0.8597 | 0.1935 | 0.1333 | 0.1579 | 0.0 | 0.0 | 0.0 | 0.6758 | 0.7601 | 0.7154 | 0.0 | 0.0 | 0.0 | 0.9157 | 0.9212 | 0.9184 | 0.3 | 0.0667 | 0.1091 | 0.0 | 0.0 | 0.0 | 0.4694 | 0.6411 | 0.5420 | 0.0 | 0.0 | 0.0 | 0.8683 | 0.8896 | 0.8788 | 0.2 | 0.1875 | 0.1935 | 0.0 | 0.0 | 0.0 | 0.3357 | 0.6349 |
| No log | 2.0 | 254 | 0.2418 | 0.5393 | 0.5993 | 0.5677 | 0.9159 | 0.5219 | 0.6110 | 0.5630 | 0.0645 | 0.1053 | 0.0800 | 0.8438 | 0.8182 | 0.8308 | 0.1379 | 0.1778 | 0.1553 | 0.6667 | 0.0741 | 0.1333 | 0.7396 | 0.7524 | 0.7460 | 0.1 | 0.1053 | 0.1026 | 0.9618 | 0.9152 | 0.9379 | 0.2093 | 0.2 | 0.2045 | 0.6667 | 0.08 | 0.1429 | 0.5226 | 0.6275 | 0.5703 | 0.0526 | 0.0769 | 0.0625 | 0.8491 | 0.8282 | 0.8385 | 0.25 | 0.3125 | 0.2778 | 0.0 | 0.0 | 0.0 | 0.3883 | 0.6615 |
| No log | 3.0 | 381 | 0.2577 | 0.6019 | 0.6380 | 0.6194 | 0.9226 | 0.5747 | 0.6422 | 0.6066 | 0.0909 | 0.1579 | 0.1154 | 0.9036 | 0.9091 | 0.9063 | 0.1579 | 0.1333 | 0.1446 | 0.6667 | 0.0741 | 0.1333 | 0.7598 | 0.7774 | 0.7685 | 0.2593 | 0.3684 | 0.3043 | 0.9455 | 0.9455 | 0.9455 | 0.2308 | 0.1333 | 0.1690 | 0.6667 | 0.08 | 0.1429 | 0.5881 | 0.6479 | 0.6165 | 0.0769 | 0.0769 | 0.0769 | 0.9091 | 0.9202 | 0.9146 | 0.2581 | 0.25 | 0.2540 | 0.0 | 0.0 | 0.0 | 0.4192 | 0.6943 |
| 0.2396 | 4.0 | 508 | 0.2655 | 0.6073 | 0.6429 | 0.6246 | 0.9200 | 0.5840 | 0.6440 | 0.6126 | 0.0 | 0.0 | 0.0 | 0.9012 | 0.8848 | 0.8930 | 0.2222 | 0.3111 | 0.2593 | 0.6667 | 0.1481 | 0.2424 | 0.7678 | 0.7678 | 0.7678 | 0.0 | 0.0 | 0.0 | 0.9689 | 0.9455 | 0.9571 | 0.2745 | 0.3111 | 0.2917 | 1.0 | 0.24 | 0.3871 | 0.5732 | 0.6185 | 0.5950 | 0.1 | 0.0769 | 0.0870 | 0.9068 | 0.8957 | 0.9012 | 0.2766 | 0.4062 | 0.3291 | 0.0 | 0.0 | 0.0 | 0.4316 | 0.6931 |
| 0.2396 | 5.0 | 635 | 0.2875 | 0.5769 | 0.6367 | 0.6053 | 0.9175 | 0.5669 | 0.6532 | 0.6070 | 0.1053 | 0.2105 | 0.1404 | 0.8598 | 0.8545 | 0.8571 | 0.1087 | 0.1111 | 0.1099 | 0.5 | 0.1481 | 0.2286 | 0.7319 | 0.7965 | 0.7629 | 0.2188 | 0.3684 | 0.2745 | 0.9627 | 0.9394 | 0.9509 | 0.1852 | 0.1111 | 0.1389 | 0.5714 | 0.16 | 0.25 | 0.5686 | 0.6546 | 0.6086 | 0.1053 | 0.1538 | 0.125 | 0.8650 | 0.8650 | 0.8650 | 0.2105 | 0.25 | 0.2286 | 0.0 | 0.0 | 0.0 | 0.4204 | 0.6853 |
| 0.2396 | 6.0 | 762 | 0.3081 | 0.6063 | 0.6442 | 0.6247 | 0.9188 | 0.5809 | 0.6459 | 0.6116 | 0.1923 | 0.2632 | 0.2222 | 0.8841 | 0.8788 | 0.8815 | 0.2 | 0.2222 | 0.2105 | 0.8 | 0.1481 | 0.25 | 0.7409 | 0.7793 | 0.7596 | 0.2381 | 0.2632 | 0.25 | 0.9571 | 0.9455 | 0.9512 | 0.2381 | 0.2222 | 0.2299 | 0.8 | 0.16 | 0.2667 | 0.5773 | 0.6659 | 0.6184 | 0.25 | 0.3077 | 0.2759 | 0.8896 | 0.8896 | 0.8896 | 0.2571 | 0.2812 | 0.2687 | 0.0 | 0.0 | 0.0 | 0.4510 | 0.6950 |
| 0.2396 | 7.0 | 889 | 0.3203 | 0.6147 | 0.6692 | 0.6408 | 0.9196 | 0.5903 | 0.6716 | 0.6283 | 0.15 | 0.1579 | 0.1538 | 0.8976 | 0.9030 | 0.9003 | 0.2182 | 0.2667 | 0.2400 | 0.5455 | 0.2222 | 0.3158 | 0.7442 | 0.7985 | 0.7704 | 0.2857 | 0.2105 | 0.2424 | 0.9398 | 0.9455 | 0.9426 | 0.2667 | 0.2667 | 0.2667 | 0.7778 | 0.28 | 0.4118 | 0.5783 | 0.6501 | 0.6121 | 0.1765 | 0.2308 | 0.2000 | 0.9085 | 0.9141 | 0.9113 | 0.2222 | 0.25 | 0.2353 | 0.6 | 0.1429 | 0.2308 | 0.4823 | 0.7039 |
| 0.0784 | 8.0 | 1016 | 0.3548 | 0.5995 | 0.6429 | 0.6205 | 0.9183 | 0.5783 | 0.6367 | 0.6061 | 0.15 | 0.1579 | 0.1538 | 0.8916 | 0.8970 | 0.8943 | 0.1875 | 0.2667 | 0.2202 | 0.5556 | 0.1852 | 0.2778 | 0.7454 | 0.7754 | 0.7601 | 0.2857 | 0.2105 | 0.2424 | 0.9455 | 0.9455 | 0.9455 | 0.2545 | 0.3111 | 0.2800 | 0.625 | 0.2 | 0.3030 | 0.5683 | 0.6388 | 0.6015 | 0.2 | 0.2308 | 0.2143 | 0.8970 | 0.9080 | 0.9024 | 0.1957 | 0.2812 | 0.2308 | 0.3333 | 0.0476 | 0.0833 | 0.4563 | 0.6927 |
| 0.0784 | 9.0 | 1143 | 0.3721 | 0.6101 | 0.6604 | 0.6343 | 0.9209 | 0.5812 | 0.6569 | 0.6167 | 0.25 | 0.2632 | 0.2564 | 0.9202 | 0.9091 | 0.9146 | 0.1964 | 0.2444 | 0.2178 | 0.4167 | 0.1852 | 0.2564 | 0.7300 | 0.7889 | 0.7583 | 0.3125 | 0.2632 | 0.2857 | 0.9630 | 0.9455 | 0.9541 | 0.2340 | 0.2444 | 0.2391 | 0.625 | 0.2 | 0.3030 | 0.5828 | 0.6591 | 0.6186 | 0.2857 | 0.3077 | 0.2963 | 0.9259 | 0.9202 | 0.9231 | 0.2432 | 0.2812 | 0.2609 | 0.3333 | 0.0952 | 0.1481 | 0.4787 | 0.7022 |
| 0.0784 | 10.0 | 1270 | 0.3782 | 0.6044 | 0.6542 | 0.6283 | 0.9197 | 0.5756 | 0.6495 | 0.6103 | 0.1923 | 0.2632 | 0.2222 | 0.9259 | 0.9091 | 0.9174 | 0.1667 | 0.2 | 0.1818 | 0.6 | 0.2222 | 0.3243 | 0.7331 | 0.7908 | 0.7608 | 0.2778 | 0.2632 | 0.2703 | 0.9630 | 0.9455 | 0.9541 | 0.2391 | 0.2444 | 0.2418 | 0.75 | 0.24 | 0.3636 | 0.5746 | 0.6524 | 0.6110 | 0.2222 | 0.3077 | 0.2581 | 0.9259 | 0.9202 | 0.9231 | 0.1842 | 0.2188 | 0.2000 | 0.25 | 0.0476 | 0.08 | 0.4663 | 0.6990 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mireiaplalis/distilbert-base-uncased-finetuned-ner-cadec
|
mireiaplalis
| 2023-12-06T14:24:43Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-05T16:17:08Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner-cadec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner-cadec
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4315
- Precision: 0.3648
- Recall: 0.3333
- F1: 0.3483
- Accuracy: 0.8708
- Adr Precision: 0.2667
- Adr Recall: 0.2716
- Adr F1: 0.2691
- Disease Precision: 0.0
- Disease Recall: 0.0
- Disease F1: 0.0
- Drug Precision: 0.7483
- Drug Recall: 0.6848
- Drug F1: 0.7152
- Finding Precision: 0.25
- Finding Recall: 0.0222
- Finding F1: 0.0408
- Symptom Precision: 0.3333
- Symptom Recall: 0.1852
- Symptom F1: 0.2381
- B-adr Precision: 0.5655
- B-adr Recall: 0.4472
- B-adr F1: 0.4995
- B-disease Precision: 0.0
- B-disease Recall: 0.0
- B-disease F1: 0.0
- B-drug Precision: 0.8768
- B-drug Recall: 0.7333
- B-drug F1: 0.7987
- B-finding Precision: 1.0
- B-finding Recall: 0.0222
- B-finding F1: 0.0435
- B-symptom Precision: 0.4167
- B-symptom Recall: 0.2
- B-symptom F1: 0.2703
- I-adr Precision: 0.2236
- I-adr Recall: 0.2009
- I-adr F1: 0.2117
- I-disease Precision: 0.0
- I-disease Recall: 0.0
- I-disease F1: 0.0
- I-drug Precision: 0.8
- I-drug Recall: 0.6871
- I-drug F1: 0.7393
- I-finding Precision: 0.0
- I-finding Recall: 0.0
- I-finding F1: 0.0
- I-symptom Precision: 0.0
- I-symptom Recall: 0.0
- I-symptom F1: 0.0
- Macro Avg F1: 0.2563
- Weighted Avg F1: 0.4250
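A minimal sketch using the raw model rather than a pipeline (the example sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "mireiaplalis/distilbert-base-uncased-finetuned-ner-cadec"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("I took ibuprofen and got a mild headache.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Map each token's highest-scoring class id back to its BIO label
labels = [model.config.id2label[int(i)] for i in logits.argmax(dim=-1)[0]]
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), labels)))
```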
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Adr Precision | Adr Recall | Adr F1 | Disease Precision | Disease Recall | Disease F1 | Drug Precision | Drug Recall | Drug F1 | Finding Precision | Finding Recall | Finding F1 | Symptom Precision | Symptom Recall | Symptom F1 | B-adr Precision | B-adr Recall | B-adr F1 | B-disease Precision | B-disease Recall | B-disease F1 | B-drug Precision | B-drug Recall | B-drug F1 | B-finding Precision | B-finding Recall | B-finding F1 | B-symptom Precision | B-symptom Recall | B-symptom F1 | I-adr Precision | I-adr Recall | I-adr F1 | I-disease Precision | I-disease Recall | I-disease F1 | I-drug Precision | I-drug Recall | I-drug F1 | I-finding Precision | I-finding Recall | I-finding F1 | I-symptom Precision | I-symptom Recall | I-symptom F1 | Macro Avg F1 | Weighted Avg F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|:------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------:|:-------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:------------:|:---------------:|
| No log | 1.0 | 127 | 0.5842 | 0.3290 | 0.1573 | 0.2128 | 0.8274 | 0.1197 | 0.0624 | 0.0820 | 0.0 | 0.0 | 0.0 | 0.9293 | 0.5576 | 0.6970 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4157 | 0.0710 | 0.1213 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0667 | 0.125 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0383 | 0.0203 | 0.0265 | 0.0 | 0.0 | 0.0 | 0.1212 | 0.0736 | 0.0916 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0364 | 0.0764 |
| No log | 2.0 | 254 | 0.5137 | 0.2948 | 0.1910 | 0.2318 | 0.8476 | 0.1337 | 0.1028 | 0.1162 | 0.0 | 0.0 | 0.0 | 0.97 | 0.5879 | 0.7321 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4724 | 0.2303 | 0.3097 | 0.0 | 0.0 | 0.0 | 0.9899 | 0.5939 | 0.7424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1053 | 0.0587 | 0.0754 | 0.0 | 0.0 | 0.0 | 0.97 | 0.5951 | 0.7376 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1865 | 0.3023 |
| No log | 3.0 | 381 | 0.4805 | 0.3057 | 0.2022 | 0.2434 | 0.8500 | 0.1487 | 0.1138 | 0.1289 | 0.0 | 0.0 | 0.0 | 0.8850 | 0.6061 | 0.7194 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5417 | 0.1747 | 0.2642 | 0.0 | 0.0 | 0.0 | 0.9623 | 0.6182 | 0.7528 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0851 | 0.0632 | 0.0725 | 0.0 | 0.0 | 0.0 | 0.9252 | 0.6074 | 0.7333 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1823 | 0.2858 |
| 0.5965 | 4.0 | 508 | 0.4963 | 0.3717 | 0.2260 | 0.2811 | 0.8570 | 0.2174 | 0.1468 | 0.1752 | 0.0 | 0.0 | 0.0 | 0.8487 | 0.6121 | 0.7113 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5853 | 0.2438 | 0.3442 | 0.0 | 0.0 | 0.0 | 0.9630 | 0.6303 | 0.7619 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1641 | 0.0948 | 0.1202 | 0.0 | 0.0 | 0.0 | 0.9107 | 0.6258 | 0.7418 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1968 | 0.3312 |
| 0.5965 | 5.0 | 635 | 0.4448 | 0.3323 | 0.2759 | 0.3015 | 0.8619 | 0.2239 | 0.2165 | 0.2201 | 0.0 | 0.0 | 0.0 | 0.7574 | 0.6242 | 0.6844 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5827 | 0.3109 | 0.4055 | 0.0 | 0.0 | 0.0 | 0.9561 | 0.6606 | 0.7814 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1314 | 0.1219 | 0.1265 | 0.0 | 0.0 | 0.0 | 0.8031 | 0.6258 | 0.7034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2017 | 0.3531 |
| 0.5965 | 6.0 | 762 | 0.4285 | 0.3672 | 0.3021 | 0.3315 | 0.8682 | 0.2553 | 0.2440 | 0.2495 | 0.0 | 0.0 | 0.0 | 0.7970 | 0.6424 | 0.7114 | 0.0 | 0.0 | 0.0 | 0.6 | 0.1111 | 0.1875 | 0.6210 | 0.3743 | 0.4671 | 0.0 | 0.0 | 0.0 | 0.9569 | 0.6727 | 0.7900 | 0.0 | 0.0 | 0.0 | 0.6 | 0.12 | 0.2000 | 0.1641 | 0.1445 | 0.1537 | 0.0 | 0.0 | 0.0 | 0.8254 | 0.6380 | 0.7197 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2330 | 0.3898 |
| 0.5965 | 7.0 | 889 | 0.4268 | 0.3582 | 0.3233 | 0.3399 | 0.8701 | 0.2531 | 0.2624 | 0.2577 | 0.0 | 0.0 | 0.0 | 0.7832 | 0.6788 | 0.7273 | 0.0 | 0.0 | 0.0 | 0.4444 | 0.1481 | 0.2222 | 0.5707 | 0.4338 | 0.4929 | 0.0 | 0.0 | 0.0 | 0.9268 | 0.6909 | 0.7917 | 0.0 | 0.0 | 0.0 | 0.5714 | 0.16 | 0.25 | 0.1932 | 0.1783 | 0.1854 | 0.0 | 0.0 | 0.0 | 0.7698 | 0.6564 | 0.7086 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2429 | 0.4087 |
| 0.3701 | 8.0 | 1016 | 0.4297 | 0.3723 | 0.3221 | 0.3454 | 0.8725 | 0.2606 | 0.2606 | 0.2606 | 0.0 | 0.0 | 0.0 | 0.8175 | 0.6788 | 0.7417 | 0.0 | 0.0 | 0.0 | 0.5 | 0.1481 | 0.2286 | 0.5805 | 0.4223 | 0.4889 | 0.0 | 0.0 | 0.0 | 0.9370 | 0.7212 | 0.8151 | 0.0 | 0.0 | 0.0 | 0.5714 | 0.16 | 0.25 | 0.2020 | 0.1828 | 0.1919 | 0.0 | 0.0 | 0.0 | 0.8358 | 0.6871 | 0.7542 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2500 | 0.4170 |
| 0.3701 | 9.0 | 1143 | 0.4345 | 0.3688 | 0.3333 | 0.3502 | 0.8717 | 0.2648 | 0.2716 | 0.2681 | 0.0 | 0.0 | 0.0 | 0.7958 | 0.6848 | 0.7362 | 0.3333 | 0.0222 | 0.0417 | 0.3571 | 0.1852 | 0.2439 | 0.5566 | 0.4530 | 0.4995 | 0.0 | 0.0 | 0.0 | 0.9023 | 0.7273 | 0.8054 | 1.0 | 0.0222 | 0.0435 | 0.4167 | 0.2 | 0.2703 | 0.2173 | 0.1874 | 0.2012 | 0.0 | 0.0 | 0.0 | 0.8235 | 0.6871 | 0.7492 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2569 | 0.4237 |
| 0.3701 | 10.0 | 1270 | 0.4315 | 0.3648 | 0.3333 | 0.3483 | 0.8708 | 0.2667 | 0.2716 | 0.2691 | 0.0 | 0.0 | 0.0 | 0.7483 | 0.6848 | 0.7152 | 0.25 | 0.0222 | 0.0408 | 0.3333 | 0.1852 | 0.2381 | 0.5655 | 0.4472 | 0.4995 | 0.0 | 0.0 | 0.0 | 0.8768 | 0.7333 | 0.7987 | 1.0 | 0.0222 | 0.0435 | 0.4167 | 0.2 | 0.2703 | 0.2236 | 0.2009 | 0.2117 | 0.0 | 0.0 | 0.0 | 0.8 | 0.6871 | 0.7393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2563 | 0.4250 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
TheBloke/Tess-7B-v1.4-GGUF
|
TheBloke
| 2023-12-06T14:07:16Z | 43 | 4 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"base_model:migtissera/Tess-7B-v1.4",
"base_model:quantized:migtissera/Tess-7B-v1.4",
"license:apache-2.0",
"region:us"
] | null | 2023-12-06T14:02:38Z |
---
base_model: migtissera/Tess-7B-v1.4
inference: false
license: apache-2.0
model_creator: Migel Tissera
model_name: Tess 7B V1.4
model_type: mistral
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tess 7B V1.4 - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Tess 7B V1.4](https://huggingface.co/migtissera/Tess-7B-v1.4)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Migel Tissera's Tess 7B V1.4](https://huggingface.co/migtissera/Tess-7B-v1.4).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tess-7B-v1.4-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tess-7B-v1.4-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Tess-7B-v1.4)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
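For programmatic use, the template can be filled like this (a minimal sketch; the system message and prompt are placeholders):
```python
# Minimal sketch for filling the Orca-Vicuna template; both values are placeholders
system_message = "You are a helpful assistant."
prompt = "Why is the sky blue?"
prompt_template = f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```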
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
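As a sanity check, the 4.5 bpw figure for GGML_TYPE_Q4_K can be reproduced from the block layout, assuming one fp16 scale and one fp16 min per super-block:
```latex
% 8 blocks x 32 weights = 256 weights per super-block
\frac{256 \times 4 \;+\; 8 \times (6 + 6) \;+\; 2 \times 16}{256}
  = \frac{1024 + 96 + 32}{256} = \frac{1152}{256} = 4.5 \ \text{bpw}
```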
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tess-7b-v1.4.Q2_K.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [tess-7b-v1.4.Q3_K_S.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [tess-7b-v1.4.Q3_K_M.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [tess-7b-v1.4.Q3_K_L.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [tess-7b-v1.4.Q4_0.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tess-7b-v1.4.Q4_K_S.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [tess-7b-v1.4.Q4_K_M.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [tess-7b-v1.4.Q5_0.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tess-7b-v1.4.Q5_K_S.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [tess-7b-v1.4.Q5_K_M.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [tess-7b-v1.4.Q6_K.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [tess-7b-v1.4.Q8_0.gguf](https://huggingface.co/TheBloke/Tess-7B-v1.4-GGUF/blob/main/tess-7b-v1.4.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
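The figures above follow a simple pattern — roughly the file size plus a fixed ~2.5 GB allowance for context and runtime buffers (an observation from this table, not an official formula). For Q4_K_M, for example:
```latex
\text{Max RAM} \approx \text{file size} + 2.5\,\text{GB} = 4.37\,\text{GB} + 2.50\,\text{GB} = 6.87\,\text{GB}
```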
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Tess-7B-v1.4-GGUF and below it, a specific filename to download, such as: tess-7b-v1.4.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Tess-7B-v1.4-GGUF tess-7b-v1.4.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Tess-7B-v1.4-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tess-7B-v1.4-GGUF tess-7b-v1.4.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m tess-7b-v1.4.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./tess-7b-v1.4.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./tess-7b-v1.4.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
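A minimal llama-cpp-python + LangChain sketch, assuming the LangChain API as of late 2023 (check the linked guides for the current interface):
```python
from langchain.llms import LlamaCpp

# Minimal sketch; parameters mirror the llama.cpp example above
llm = LlamaCpp(
    model_path="./tess-7b-v1.4.Q4_K_M.gguf",
    n_gpu_layers=35,  # set to 0 without GPU acceleration
    n_ctx=32768,
    temperature=0.7,
)
print(llm("SYSTEM: You are a helpful assistant.\nUSER: Why is the sky blue?\nASSISTANT:"))
```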
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Migel Tissera's Tess 7B V1.4
# Tess

Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-XS-v1.4 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
<!-- original-model-card end -->
|
LarryAIDraw/degenbrecher_arknights
|
LarryAIDraw
| 2023-12-06T13:59:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-06T13:58:20Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/125736/degenbrecher-arknights
|
Fenyan/ppo-LunarLander-v2
|
Fenyan
| 2023-12-06T13:55:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-06T13:55:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.48 +/- 18.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check this repo's Files tab):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption -- check this repo's Files tab for the actual checkpoint name
checkpoint = load_from_hub("Fenyan/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CoderCowMoo/HAL-9000-RVC-v2
|
CoderCowMoo
| 2023-12-06T13:48:47Z | 0 | 1 | null |
[
"en",
"region:us"
] | null | 2023-12-06T13:32:37Z |
---
language:
- en
---
Trained for 250 epochs in the official UI on 8 minutes of training data cut from the movie.
|
nrshoudi/hubert_arabic_mdd_5
|
nrshoudi
| 2023-12-06T13:42:01Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/hubert-large-ls960-ft",
"base_model:finetune:facebook/hubert-large-ls960-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-06T13:41:10Z |
---
license: apache-2.0
base_model: facebook/hubert-large-ls960-ft
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: hubert_arabic_mdd_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert_arabic_mdd_5
This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6845
- Wer: 0.1910
- Per: 0.1682
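A minimal inference sketch (assumes 16 kHz mono audio, as expected by the HuBERT base checkpoint; the file name is a placeholder):
```python
from transformers import pipeline

# Minimal sketch; "sample.wav" is a placeholder for a 16 kHz Arabic speech recording
asr = pipeline("automatic-speech-recognition", model="nrshoudi/hubert_arabic_mdd_5")
print(asr("sample.wav")["text"])
```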
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Per |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 21.0868 | 1.0 | 82 | 3.5829 | 1.0 | 1.0 |
| 3.5419 | 2.0 | 164 | 3.3544 | 1.0 | 1.0 |
| 3.3109 | 3.0 | 246 | 3.1878 | 1.0 | 1.0 |
| 3.2117 | 4.0 | 328 | 3.1324 | 1.0 | 1.0 |
| 3.0509 | 5.0 | 410 | 2.9238 | 1.0 | 1.0 |
| 2.7904 | 6.0 | 492 | 2.4830 | 1.0 | 1.0 |
| 2.2728 | 7.0 | 574 | 2.0478 | 0.8358 | 0.8474 |
| 1.8365 | 8.0 | 656 | 1.6043 | 0.7094 | 0.7206 |
| 1.4603 | 9.0 | 738 | 1.2872 | 0.6193 | 0.6247 |
| 1.2203 | 10.0 | 820 | 1.2694 | 0.4343 | 0.4229 |
| 0.9726 | 11.0 | 902 | 0.9350 | 0.4191 | 0.4116 |
| 0.8458 | 12.0 | 984 | 0.8471 | 0.3734 | 0.3640 |
| 0.7365 | 13.0 | 1066 | 0.8078 | 0.3530 | 0.3439 |
| 0.6204 | 14.0 | 1148 | 0.7618 | 0.3031 | 0.2874 |
| 0.5913 | 15.0 | 1230 | 0.7372 | 0.2598 | 0.2382 |
| 0.5217 | 16.0 | 1312 | 0.7219 | 0.2354 | 0.2149 |
| 0.4866 | 17.0 | 1394 | 0.7125 | 0.2071 | 0.1836 |
| 0.4735 | 18.0 | 1476 | 0.7071 | 0.1967 | 0.1742 |
| 0.462 | 19.0 | 1558 | 0.6910 | 0.1910 | 0.1677 |
| 0.458 | 20.0 | 1640 | 0.6845 | 0.1910 | 0.1682 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
NathanJLee/NLP2_Finetune_1e-4
|
NathanJLee
| 2023-12-06T13:39:16Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:NathanJLee/NLP2_Base_1e-4",
"base_model:finetune:NathanJLee/NLP2_Base_1e-4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-06T13:31:34Z |
---
license: apache-2.0
base_model: NathanJLee/NLP2_Base_1e-4
tags:
- generated_from_trainer
model-index:
- name: NLP2_Finetune_1e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP2_Finetune_1e-4
This model is a fine-tuned version of [NathanJLee/NLP2_Base_1e-4](https://huggingface.co/NathanJLee/NLP2_Base_1e-4) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
VRT-2428211/NLP2_Base_5e-5_Fixed
|
VRT-2428211
| 2023-12-06T13:20:51Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-06T12:29:02Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: NLP2_Base_5e-5_Fixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP2_Base_5e-5_Fixed
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
birdhouse5/textual_inversion_naive_artstyle_Ac
|
birdhouse5
| 2023-12-06T13:14:42Z | 9 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-06T11:07:29Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - birdhouse5/textual_inversion_naive_artstyle
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
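A minimal 🧨 diffusers sketch (the placeholder token used in the prompt is an assumption — check the repo files for the learned token name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Loads the learned embedding from this repo; the token below is an assumption
pipe.load_textual_inversion("birdhouse5/textual_inversion_naive_artstyle_Ac")
image = pipe("a landscape in <naive-artstyle> style").images[0]
image.save("naive_artstyle.png")
```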
|
Formid322/gooroomee
|
Formid322
| 2023-12-06T13:07:42Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-06T13:04:04Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
kuhess/recurrent-ppo-LunarLander-v2
|
kuhess
| 2023-12-06T12:45:02Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-06T12:40:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: Recurrent-PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 290.21 +/- 13.59
name: mean_reward
verified: false
---
# **Recurrent-PPO** Agent playing **LunarLander-v2**
This is a trained model of a **Recurrent-PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (RecurrentPPO lives in `sb3_contrib`; the checkpoint filename is an assumption — check this repo's Files tab):
```python
from huggingface_sb3 import load_from_hub
from sb3_contrib import RecurrentPPO

# Filename is an assumption -- check this repo's Files tab for the actual checkpoint name
checkpoint = load_from_hub("kuhess/recurrent-ppo-LunarLander-v2", "recurrent-ppo-LunarLander-v2.zip")
model = RecurrentPPO.load(checkpoint)
```
|
BreadAi/gpt-Youtube
|
BreadAi
| 2023-12-06T12:44:54Z | 1,415 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"dataset:breadlicker45/youtube-comments-180k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-23T02:47:05Z |
---
datasets:
- breadlicker45/youtube-comments-180k
pipeline_tag: text-generation
---
This model was trained on 180K YouTube comments for 100k steps.
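A minimal generation sketch (the prompt is illustrative):
```python
from transformers import pipeline

# Minimal sketch for this GPT-NeoX checkpoint
generator = pipeline("text-generation", model="BreadAi/gpt-Youtube")
print(generator("this video is", max_new_tokens=40)[0]["generated_text"])
```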
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BreadAi__gpt-Youtube)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.86 |
| ARC (25-shot) | 23.29 |
| HellaSwag (10-shot) | 26.34 |
| MMLU (5-shot) | 23.54 |
| TruthfulQA (0-shot) | 48.63 |
| Winogrande (5-shot) | 48.93 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 3.32 |
|
ZY-Dijkstra/cat
|
ZY-Dijkstra
| 2023-12-06T12:11:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-06T10:18:27Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - ZY-Dijkstra/cat
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
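A minimal 🧨 diffusers loading sketch using the instance prompt above:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ZY-Dijkstra/cat", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks cat").images[0]  # instance prompt from the metadata above
image.save("sks_cat.png")
```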
|
multimodalart/politurbo3
|
multimodalart
| 2023-12-06T12:09:51Z | 6 | 3 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-06T10:06:48Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/politurbo3
<Gallery />
## Model description
### These are multimodalart/politurbo3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/politurbo3', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/politurbo3', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2)
image = pipeline('A photo of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/politurbo3/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/politurbo3/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/politurbo3/tree/main).
|
personal1802/39
|
personal1802
| 2023-12-06T12:08:02Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2023-12-06T12:04:36Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/WHITE.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# fiamixRebootHNSFW_v41
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/personal1802/39/tree/main) them in the Files & versions tab.
|
nrshoudi/hubert_arabic_mdd
|
nrshoudi
| 2023-12-06T11:55:26Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/hubert-large-ls960-ft",
"base_model:finetune:facebook/hubert-large-ls960-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-11-07T07:31:01Z |
---
license: apache-2.0
base_model: facebook/hubert-large-ls960-ft
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: hubert_arabic_mdd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert_arabic_mdd
This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5363
- Wer: 0.0490
- Per: 0.0376
## Model description
More information needed
## Intended uses & limitations
More information needed
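The card provides no usage snippet; as a minimal, untested sketch, inference should work through the standard transformers ASR pipeline (the audio path is a placeholder and assumes a 16 kHz mono recording):
```python
from transformers import pipeline

# Hedged sketch: load this fine-tuned HuBERT checkpoint for Arabic speech recognition.
asr = pipeline("automatic-speech-recognition", model="nrshoudi/hubert_arabic_mdd")
print(asr("arabic_sample.wav")["text"])  # "arabic_sample.wav" is a placeholder file
```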
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Per |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 2.371 | 1.0 | 818 | 1.3106 | 0.6828 | 0.6863 |
| 0.9013 | 2.0 | 1636 | 0.4890 | 0.1144 | 0.0985 |
| 0.3369 | 3.0 | 2454 | 0.4295 | 0.0766 | 0.0611 |
| 0.2219 | 4.0 | 3272 | 0.4386 | 0.0649 | 0.0508 |
| 0.1565 | 5.0 | 4090 | 0.4124 | 0.0660 | 0.0506 |
| 0.1343 | 6.0 | 4908 | 0.4422 | 0.0630 | 0.0493 |
| 0.106 | 7.0 | 5726 | 0.4762 | 0.0600 | 0.0469 |
| 0.091 | 8.0 | 6544 | 0.4487 | 0.0580 | 0.0460 |
| 0.0745 | 9.0 | 7362 | 0.4284 | 0.0577 | 0.0461 |
| 0.0708 | 10.0 | 8180 | 0.4161 | 0.0578 | 0.0451 |
| 0.0621 | 11.0 | 8998 | 0.4659 | 0.0535 | 0.0414 |
| 0.0492 | 12.0 | 9816 | 0.5249 | 0.0557 | 0.0433 |
| 0.0479 | 13.0 | 10634 | 0.5411 | 0.0550 | 0.0426 |
| 0.0452 | 14.0 | 11452 | 0.5161 | 0.0536 | 0.0410 |
| 0.0385 | 15.0 | 12270 | 0.5002 | 0.0521 | 0.0404 |
| 0.0354 | 16.0 | 13088 | 0.4800 | 0.0499 | 0.0389 |
| 0.0342 | 17.0 | 13906 | 0.5079 | 0.0506 | 0.0394 |
| 0.0269 | 18.0 | 14724 | 0.5144 | 0.0499 | 0.0386 |
| 0.0247 | 19.0 | 15542 | 0.5334 | 0.0496 | 0.0380 |
| 0.0208 | 20.0 | 16360 | 0.5363 | 0.0490 | 0.0376 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
nisharganirjan/Llama2Tuning-odia_llama2_7B_base
|
nisharganirjan
| 2023-12-06T11:49:14Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2023-12-06T11:49:08Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
zhijian12345/marian-finetuned-kde4-en-to-zh_CN
|
zhijian12345
| 2023-12-06T11:48:19Z | 16 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-zh",
"base_model:finetune:Helsinki-NLP/opus-mt-en-zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-12-06T10:58:12Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-zh
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-zh_CN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh_CN
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
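No usage example is included in the card; a minimal sketch using the standard transformers translation pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

# Hedged sketch: load this fine-tuned Marian checkpoint for English -> Simplified Chinese.
translator = pipeline("translation", model="zhijian12345/marian-finetuned-kde4-en-to-zh_CN")
print(translator("Open the file menu.")[0]["translation_text"])
```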
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Dhruvil47/falcon-7b-bioarxiv
|
Dhruvil47
| 2023-12-06T11:46:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2023-12-06T11:10:01Z |
---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "Dhruvil47/falcon-7b-bioarxiv"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"NKX2-1 works through",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
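For reference, the settings listed above correspond roughly to the following `BitsAndBytesConfig` (a sketch; exact argument names depend on the transformers version):
```python
import torch
from transformers import BitsAndBytesConfig

# Hedged reconstruction of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```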
### Framework versions
- PEFT 0.6.3.dev0
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow51
|
FounderOfHuggingface
| 2023-12-06T11:46:31Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:46:29Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
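The card leaves this section empty; as a minimal sketch, the adapter should load onto its gpt2 base model with the standard peft API (the same pattern applies to the sibling `shadow*` adapters in this series):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: attach this LoRA adapter to the gpt2 base model.
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow51")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```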
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow49
|
FounderOfHuggingface
| 2023-12-06T11:46:22Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:46:20Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow45
|
FounderOfHuggingface
| 2023-12-06T11:46:04Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:46:03Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow40
|
FounderOfHuggingface
| 2023-12-06T11:45:42Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:45:41Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow33
|
FounderOfHuggingface
| 2023-12-06T11:45:10Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:45:08Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow31
|
FounderOfHuggingface
| 2023-12-06T11:44:58Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:44:55Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow29
|
FounderOfHuggingface
| 2023-12-06T11:44:49Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:44:47Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow25
|
FounderOfHuggingface
| 2023-12-06T11:44:32Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:44:31Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow20
|
FounderOfHuggingface
| 2023-12-06T11:44:11Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:44:09Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow17 | FounderOfHuggingface | 2023-12-06T11:43:59Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:43:57Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
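No official snippet is provided, so the following is a minimal sketch, assuming the standard PEFT loading flow implied by the frontmatter (`library_name: peft`, `base_model: gpt2`); the prompt is an illustrative placeholder:

```python
# Minimal sketch (assumed usage, not documented): load the LoRA adapter on
# top of the gpt2 base model with PEFT (this card lists PEFT 0.6.2).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow17",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```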
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow16 | FounderOfHuggingface | 2023-12-06T11:43:54Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:43:53Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
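The official code is still marked as missing; the block below is a hedged sketch of the usual `transformers` + `peft` loading pattern for this adapter, with an illustrative prompt rather than the documented task:

```python
# Hedged sketch (assumptions, not documented usage): attach the LoRA adapter
# to the gpt2 base model declared in the frontmatter (PEFT 0.6.2 listed).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow16",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```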
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow15 | FounderOfHuggingface | 2023-12-06T11:43:50Z | 2 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:43:49Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
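Pending the official snippet, here is a minimal sketch assuming the generic PEFT workflow (base model `gpt2` per the frontmatter); treat the prompt as a placeholder:

```python
# Minimal sketch (assumed usage, not documented): load the LoRA adapter on
# top of the gpt2 base model with PEFT (this card lists PEFT 0.6.2).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow15",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```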
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow14 | FounderOfHuggingface | 2023-12-06T11:43:46Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:43:45Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
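No official snippet is provided, so the following is a minimal sketch, assuming the standard PEFT loading flow implied by the frontmatter (`library_name: peft`, `base_model: gpt2`); the prompt is an illustrative placeholder:

```python
# Minimal sketch (assumed usage, not documented): load the LoRA adapter on
# top of the gpt2 base model with PEFT (this card lists PEFT 0.6.2).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow14",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```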
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow12 | FounderOfHuggingface | 2023-12-06T11:43:38Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:43:36Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
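The official code is still marked as missing; the block below is a hedged sketch of the usual `transformers` + `peft` loading pattern for this adapter, with an illustrative prompt rather than the documented task:

```python
# Hedged sketch (assumptions, not documented usage): attach the LoRA adapter
# to the gpt2 base model declared in the frontmatter (PEFT 0.6.2 listed).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow12",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```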
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow11 | FounderOfHuggingface | 2023-12-06T11:43:34Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:43:31Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
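Pending the official snippet, here is a minimal sketch assuming the generic PEFT workflow (base model `gpt2` per the frontmatter); treat the prompt as a placeholder:

```python
# Minimal sketch (assumed usage, not documented): load the LoRA adapter on
# top of the gpt2 base model with PEFT (this card lists PEFT 0.6.2).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow11",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```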
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow9 | FounderOfHuggingface | 2023-12-06T11:43:23Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:43:21Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
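No official snippet is provided, so the following is a minimal sketch, assuming the standard PEFT loading flow implied by the frontmatter (`library_name: peft`, `base_model: gpt2`); the prompt is an illustrative placeholder:

```python
# Minimal sketch (assumed usage, not documented): load the LoRA adapter on
# top of the gpt2 base model with PEFT (this card lists PEFT 0.6.2).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow9",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```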
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow8 | FounderOfHuggingface | 2023-12-06T11:43:19Z | 1 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:43:16Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
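The official code is still marked as missing; the block below is a hedged sketch of the usual `transformers` + `peft` loading pattern for this adapter, with an illustrative prompt rather than the documented task:

```python
# Hedged sketch (assumptions, not documented usage): attach the LoRA adapter
# to the gpt2 base model declared in the frontmatter (PEFT 0.6.2 listed).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow8",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```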
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow6 | FounderOfHuggingface | 2023-12-06T11:43:09Z | 1 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:43:07Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
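Pending the official snippet, here is a minimal sketch assuming the generic PEFT workflow (base model `gpt2` per the frontmatter); treat the prompt as a placeholder:

```python
# Minimal sketch (assumed usage, not documented): load the LoRA adapter on
# top of the gpt2 base model with PEFT (this card lists PEFT 0.6.2).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow6",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```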
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow5 | FounderOfHuggingface | 2023-12-06T11:43:05Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:43:03Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
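No official snippet is provided, so the following is a minimal sketch, assuming the standard PEFT loading flow implied by the frontmatter (`library_name: peft`, `base_model: gpt2`); the prompt is an illustrative placeholder:

```python
# Minimal sketch (assumed usage, not documented): load the LoRA adapter on
# top of the gpt2 base model with PEFT (this card lists PEFT 0.6.2).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow5",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```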
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow2 | FounderOfHuggingface | 2023-12-06T11:42:52Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us"] | null | 2023-12-06T11:42:50Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
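The official code is still marked as missing; the block below is a hedged sketch of the usual `transformers` + `peft` loading pattern for this adapter, with an illustrative prompt rather than the documented task:

```python
# Hedged sketch (assumptions, not documented usage): attach the LoRA adapter
# to the gpt2 base model declared in the frontmatter (PEFT 0.6.2 listed).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base_model,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow2",
)

# Illustrative prompt only; the adapter's intended task is not documented here.
inputs = tokenizer("DBpedia entry:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```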
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow1
|
FounderOfHuggingface
| 2023-12-06T11:42:47Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:42:45Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
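No official example is provided; below is a minimal, unofficial sketch of loading this adapter onto its gpt2 base (assumption, based on the card metadata: the repo hosts a `peft` LoRA adapter):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the gpt2 base model and attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow1"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```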
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow0
|
FounderOfHuggingface
| 2023-12-06T11:42:42Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T11:42:39Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
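No official example is provided; below is a minimal, unofficial sketch of loading this adapter onto its gpt2 base (assumption, based on the card metadata: the repo hosts a `peft` LoRA adapter):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the gpt2 base model and attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow0"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```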
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
livingbox/italian-style-new-v4
|
livingbox
| 2023-12-06T11:07:06Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-06T11:03:16Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Italian_Style_new.V4 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
athirdpath/Iambe-20b-v3-RP-DPO-DELAYED
|
athirdpath
| 2023-12-06T11:01:49Z | 0 | 2 | null |
[
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-12-06T10:47:39Z |
---
license: cc-by-nc-4.0
language:
- en
---
<p align="center"><img src="https://i.ibb.co/zxwNXRC/2mhjis.jpg"/><font size="5">Should've used cDPO...</font></p>
<p align="center"><font size="4"> <i>Read too much, decided to use RSO instead, things were great, walked away... zero loss and noisy as hell. Starting over tomorrow.</i></font></p>
|
gyr66/chinese-roberta-wwm-ext-large-lora-ner
|
gyr66
| 2023-12-06T10:53:18Z | 0 | 0 | null |
[
"pytorch",
"generated_from_trainer",
"zh",
"dataset:gyr66/privacy_detection",
"license:apache-2.0",
"region:us"
] | null | 2023-12-06T08:01:00Z |
---
language:
- zh
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- gyr66/privacy_detection
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: chinese-roberta-wwm-ext-large-lora-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-roberta-wwm-ext-large-lora-ner
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) on the gyr66/privacy_detection dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3302
- Precision: 0.6010
- Recall: 0.7258
- F1: 0.6575
- Accuracy: 0.9106
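No usage example is provided yet; here is a minimal sketch (our assumption: the repo loads directly as a token-classification model — if it only hosts a LoRA adapter, attach it to hfl/chinese-roberta-wwm-ext-large with `peft` first):

```python
from transformers import pipeline

# Hypothetical usage for Chinese privacy-entity detection.
ner = pipeline(
    "token-classification",
    model="gyr66/chinese-roberta-wwm-ext-large-lora-ner",
    aggregation_strategy="simple",
)
print(ner("我叫张三,住在北京。"))
```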
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 28
- eval_batch_size: 56
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.729 | 1.0 | 72 | 0.3562 | 0.4495 | 0.5818 | 0.5072 | 0.8865 |
| 0.3155 | 2.0 | 144 | 0.3243 | 0.5155 | 0.6636 | 0.5803 | 0.8972 |
| 0.2561 | 3.0 | 216 | 0.3021 | 0.5599 | 0.7004 | 0.6223 | 0.9067 |
| 0.2283 | 4.0 | 288 | 0.3049 | 0.5670 | 0.6984 | 0.6259 | 0.9044 |
| 0.1952 | 5.0 | 360 | 0.3144 | 0.5836 | 0.7145 | 0.6424 | 0.9076 |
| 0.174 | 6.0 | 432 | 0.3157 | 0.5787 | 0.7183 | 0.6410 | 0.9063 |
| 0.155 | 7.0 | 504 | 0.3223 | 0.5966 | 0.7246 | 0.6544 | 0.9083 |
| 0.1436 | 8.0 | 576 | 0.3267 | 0.5921 | 0.7210 | 0.6502 | 0.9088 |
| 0.1298 | 9.0 | 648 | 0.3345 | 0.5965 | 0.7276 | 0.6556 | 0.9089 |
| 0.1226 | 10.0 | 720 | 0.3302 | 0.6010 | 0.7258 | 0.6575 | 0.9106 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.2
|
chethu/distilbert-base-uncased-finetuned-emotion
|
chethu
| 2023-12-06T10:44:28Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-06T10:28:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2098
- Accuracy: 0.9265
- F1: 0.9265
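For reference, a minimal inference sketch (our example, using the checkpoint published in this repo):

```python
from transformers import pipeline

# Classify the emotion of a short text with the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="chethu/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy today!"))
```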
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8019 | 1.0 | 250 | 0.3192 | 0.906 | 0.9053 |
| 0.2502 | 2.0 | 500 | 0.2098 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
mobiuslabsgmbh/CLIP-ViT-H-14-laion2B-2bit_g16_s128-HQQ
|
mobiuslabsgmbh
| 2023-12-06T10:40:48Z | 28 | 4 |
transformers
|
[
"transformers",
"image-classification",
"license:mit",
"region:us"
] |
image-classification
| 2023-11-22T10:23:44Z |
---
license: mit
train: false
inference: false
pipeline_tag: image-classification
---
## CLIP-ViT-H-14-laion2B-2bit_g16_s128-HQQ
This is a version of the ViT-H-14 vision model based on timm's ```vit_huge_patch14_clip_224.laion2b``` quantized to 2-bit via Half-Quadratic Quantization (HQQ): https://mobiusml.github.io/hqq_blog/
This 2-bit model achieves a 0.716 zero-shot top-1 accuracy on Imagenet, outperforming a full-precision ViT-B-32 (0.664).
### Basic Usage
To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:
``` Python
from hqq.engine.timm import HQQtimm
model = HQQtimm.from_quantized("mobiuslabsgmbh/CLIP-ViT-H-14-laion2B-2bit_g16_s128-HQQ")
```
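Once loaded, the quantized backbone can be called like any timm vision model; a minimal sketch (our example, reusing `model` from above and assuming a CUDA device with CLIP-style 224x224 inputs):

```python
import torch

# Hypothetical input: one normalized 224x224 RGB image batch.
x = torch.randn(1, 3, 224, 224).half().cuda()
with torch.no_grad():
    features = model(x)  # image embedding from the 2-bit ViT-H-14
print(features.shape)
```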
### Zero-Shot Classification
For zero-shot classification you'd need the text model as well, here's a complete example:
``` Python
!pip install open_clip_torch
!pip install Pillow
import torch
import numpy as np
import open_clip
orig_model, _ , preprocess = open_clip.create_model_and_transforms('ViT-H-14', pretrained='laion2B-s32B-b79K')
tokenizer = open_clip.get_tokenizer('ViT-H-14')
model_text = orig_model.encode_text
from hqq.engine.timm import HQQtimm
model_visual = HQQtimm.from_quantized("mobiuslabsgmbh/CLIP-ViT-H-14-laion2B-2bit_g16_s128-HQQ")
###############################################################
#Add your own templates here, we provide simple ones below.
#https://github.com/openai/CLIP/blob/main/data/prompts.md for the complete list
TEMPLATES = (
    lambda c: f'itap of a {c}.',
    lambda c: f'a origami {c}.',
    lambda c: f'a bad photo of the {c}.',
    lambda c: f'a photo of the large {c}.',
    lambda c: f'a photo of the small {c}.',
    lambda c: f'a {c} in a video game.',
    lambda c: f'art of the {c}.',
)

@torch.no_grad()
def forward_image(img):
    x = preprocess(img).unsqueeze(0)
    f = model_visual(x.half().cuda())
    f /= torch.norm(f, p=2, dim=-1, keepdim=True)
    return f

@torch.no_grad()
def forward_text(text_batch_list, normalize=True):
    inputs = tokenizer(text_batch_list)
    f = model_text(inputs)
    if normalize:
        f /= torch.norm(f, p=2, dim=-1, keepdim=True)
    del inputs
    return f.half().to('cuda')

def forward_text_with_templates(text, templates=TEMPLATES, normalize=True):
    f = forward_text([t(text) for t in templates], normalize=False).mean(axis=0)
    if normalize:
        f /= torch.norm(f, p=2, dim=-1, keepdim=True)
    return f

def classifier_zero_shot_with_pil(img, classes):
    classifiers = torch.cat([forward_text_with_templates(c).reshape([1, -1]) for c in classes], axis=0)
    img_features = forward_image(img)
    scores = torch.matmul(img_features, classifiers.T)[0].detach().cpu().numpy()
    out = classes[np.argmax(scores)]
    return out
###############################################################
from PIL import Image
import requests
#img_path_or_url = 'https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-looking-at-camera-1593184780.jpg' #Cat
#img_path_or_url = 'https://www.shutterstock.com/image-photo/photo-cute-golden-retriever-running-600nw-2291249193.jpg' #Dog
img_path_or_url = "https://my-sweet-usa.de/cdn/shop/products/1727.jpg" #bag of chips
img = Image.open(requests.get(img_path_or_url, stream=True).raw)
classes = ['cat', 'dog', 'car', 'tiger', 'bag of chips']
out = classifier_zero_shot_with_pil(img, classes)
print("It's a picture of a " + out) #It's a picture of a bag of chips
```
*Limitations*:
- Only supports single-GPU runtime.
- Doesn't support fine-tuning the linear layers.
|
AbhiPemmaraju/finetuned_LAMA2
|
AbhiPemmaraju
| 2023-12-06T10:32:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-06T09:00:43Z |
This model is fine-tuned on the AESLC dataset, which is used for the email subject-line generation task.
I fine-tuned the LLM using PEFT, QLoRA, and SFTTrainer (SFTT).
The model can be run in a Kaggle notebook.
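A minimal inference sketch (our example; the repo id is this model's page, and the prompt format is hypothetical — adjust it to whatever format was used during fine-tuning):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AbhiPemmaraju/finetuned_LAMA2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt for subject-line generation on an AESLC-style email body.
email_body = "Hi team, attached are the Q3 numbers ahead of Friday's review."
prompt = f"Generate a subject line for this email:\n{email_body}\nSubject:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```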
|
Siphh/wabLab2
|
Siphh
| 2023-12-06T10:28:27Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"sv",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-03T12:13:52Z |
---
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_11_0
language:
- sv
metrics:
- wer
---
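The card has no usage notes yet; below is a minimal transcription sketch (our example, assuming this is a fine-tuned Whisper checkpoint in `transformers` format; `sample_sv.wav` is a hypothetical local file):

```python
from transformers import pipeline

# Transcribe a Swedish audio file with the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="Siphh/wabLab2")
print(asr("sample_sv.wav")["text"])
```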
|
Anant58/ReinforcePixelcopter
|
Anant58
| 2023-12-06T10:23:49Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-06T10:23:45Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: ReinforcePixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 40.30 +/- 14.91
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nakkati/baseline_with_pl_txt_encoder2
|
nakkati
| 2023-12-06T10:01:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-12-06T07:34:27Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of Luffy, the pirate with a straw hat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - nakkati/baseline_with_pl_txt_encoder2
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on the instance prompt "photo of Luffy, the pirate with a straw hat" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
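A minimal sketch of using these weights with diffusers (our example; `load_lora_weights` availability depends on your diffusers version):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
# Attach the LoRA adaptation weights from this repo.
pipe.load_lora_weights("nakkati/baseline_with_pl_txt_encoder2")
image = pipe("photo of Luffy, the pirate with a straw hat").images[0]
image.save("luffy.png")
```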
|
TheBloke/NeuralOrca-7B-v1-AWQ
|
TheBloke
| 2023-12-06T09:58:01Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:mrfakename/NeuralOrca-7B-v1",
"base_model:quantized:mrfakename/NeuralOrca-7B-v1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2023-12-06T09:39:35Z |
---
base_model: mrfakename/NeuralOrca-7B-v1
inference: false
license: apache-2.0
model_creator: mrfakename
model_name: NeuralOrca 7B V1
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# NeuralOrca 7B V1 - AWQ
- Model creator: [mrfakename](https://huggingface.co/mrfakename)
- Original model: [NeuralOrca 7B V1](https://huggingface.co/mrfakename/NeuralOrca-7B-v1)
<!-- description start -->
## Description
This repo contains AWQ model files for [mrfakename's NeuralOrca 7B V1](https://huggingface.co/mrfakename/NeuralOrca-7B-v1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF)
* [mrfakename's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mrfakename/NeuralOrca-7B-v1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/NeuralOrca-7B-v1-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `NeuralOrca-7B-v1-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/NeuralOrca-7B-v1-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]

# Define the ChatML template; system_message is an example value you should adapt.
system_message = "You are a helpful assistant."
prompt_template = '''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/NeuralOrca-7B-v1-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/NeuralOrca-7B-v1-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

# Build the ChatML prompt; system_message is an example value you should adapt.
system_message = "You are a helpful assistant."
prompt = "Tell me about AI"
prompt_template = f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

client = InferenceClient(endpoint_url)

response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/NeuralOrca-7B-v1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Build the ChatML prompt; system_message is an example value you should adapt.
system_message = "You are a helpful assistant."
prompt = "Tell me about AI"
prompt_template = f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: mrfakename's NeuralOrca 7B V1
# NeuralOrca 7B V1
[GGUF Models](https://huggingface.co/mrfakename/NeuralOrca-7B-v1-gguf) ([Request](https://huggingface.co/mrfakename/NeuralOrca-7B-v1/discussions/1))
**By [mrfakename](https://twitter.com/realmrfakename)**
*Please note that this is an experimental model. We cannot guarantee model quality.*
This is the first (alpha) release of NeuralOrca. NeuralOrca is a merge of the following models:
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) (This model is actually [OpenHermes 2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) finetuned on Intel's [Neural Chat dataset](https://huggingface.co/datasets/Intel/neural-chat-dataset-v2) and uses the ChatML prompt format, weight: 1.0)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) (This model uses the ChatML prompt format, weight: 0.7)
## Prompt Format
We use the ChatML prompt format.
Example:
```
<|im_start|>system
You are NeuralOrca, a helpful AI assistant.
<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
```
## Evaluations
Coming soon
## Context Length
The context length for this model is 8192 tokens (8K).
## License
You are responsible for your use of NeuralOrca.
This software is licensed under the Apache 2.0 license. If you want to use it commercially, that's probably fine, but please contact me first.
|
livingbox/italian-style-new-v3
|
livingbox
| 2023-12-06T09:54:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-06T09:50:42Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Italian_Style_new.V3 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
pintt/serum
|
pintt
| 2023-12-06T09:47:50Z | 0 | 0 | null |
[
"medical",
"image-to-text",
"ar",
"dataset:HuggingFaceH4/no_robots",
"region:us"
] |
image-to-text
| 2023-12-06T09:46:09Z |
---
datasets:
- HuggingFaceH4/no_robots
language:
- ar
metrics:
- character
pipeline_tag: image-to-text
tags:
- medical
---
|
Weyaxi/MetaMath-una-cybertron-v2-bf16-Linear
|
Weyaxi
| 2023-12-06T09:47:33Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-06T09:41:41Z |
    models:
      - model: meta-math/MetaMath-Mistral-7B
        parameters:
          weight: 0.5
      - model: fblgit/una-cybertron-7b-v2-bf16
        parameters:
          weight: 0.3
    merge_method: linear
    dtype: float16
|
TheBloke/NeuralOrca-7B-v1-GGUF
|
TheBloke
| 2023-12-06T09:44:46Z | 37 | 4 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"base_model:mrfakename/NeuralOrca-7B-v1",
"base_model:quantized:mrfakename/NeuralOrca-7B-v1",
"license:apache-2.0",
"region:us",
"conversational"
] | null | 2023-12-06T09:39:35Z |
---
base_model: mrfakename/NeuralOrca-7B-v1
inference: false
license: apache-2.0
model_creator: mrfakename
model_name: NeuralOrca 7B V1
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# NeuralOrca 7B V1 - GGUF
- Model creator: [mrfakename](https://huggingface.co/mrfakename)
- Original model: [NeuralOrca 7B V1](https://huggingface.co/mrfakename/NeuralOrca-7B-v1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [mrfakename's NeuralOrca 7B V1](https://huggingface.co/mrfakename/NeuralOrca-7B-v1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF)
* [mrfakename's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mrfakename/NeuralOrca-7B-v1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [neuralorca-7b-v1.Q2_K.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [neuralorca-7b-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [neuralorca-7b-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [neuralorca-7b-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [neuralorca-7b-v1.Q4_0.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [neuralorca-7b-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [neuralorca-7b-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [neuralorca-7b-v1.Q5_0.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [neuralorca-7b-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [neuralorca-7b-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [neuralorca-7b-v1.Q6_K.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [neuralorca-7b-v1.Q8_0.gguf](https://huggingface.co/TheBloke/NeuralOrca-7B-v1-GGUF/blob/main/neuralorca-7b-v1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/NeuralOrca-7B-v1-GGUF and below it, a specific filename to download, such as: neuralorca-7b-v1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/NeuralOrca-7B-v1-GGUF neuralorca-7b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/NeuralOrca-7B-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/NeuralOrca-7B-v1-GGUF neuralorca-7b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m neuralorca-7b-v1.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./neuralorca-7b-v1.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./neuralorca-7b-v1.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
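As a quick illustration, here is a minimal LangChain sketch using llama-cpp-python. It assumes the import path current at the time of writing (`langchain.llms.LlamaCpp`) and the model file downloaded above; check the linked guide if the API has moved:
```python
from langchain.llms import LlamaCpp

# Assumes the GGUF file sits in the current directory, as downloaded above
llm = LlamaCpp(
    model_path="./neuralorca-7b-v1.Q4_K_M.gguf",
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    n_ctx=32768,
    temperature=0.7,
)
print(llm("Name three facts about llamas."))
```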
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: mrfakename's NeuralOrca 7B V1
# NeuralOrca 7B V1
[GGUF Models](https://huggingface.co/mrfakename/NeuralOrca-7B-v1-gguf) ([Request](https://huggingface.co/mrfakename/NeuralOrca-7B-v1/discussions/1))
**By [mrfakename](https://twitter.com/realmrfakename)**
*Please note that this is an experimental model. We cannot guarantee model quality.*
This is the first (alpha) release of NeuralOrca. NeuralOrca is a merge of the following models:
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) (This model is actually [OpenHermes 2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) finetuned on Intel's [Neural Chat dataset](https://huggingface.co/datasets/Intel/neural-chat-dataset-v2) and uses the ChatML prompt format, weight: 1.0)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) (This model uses the ChatML prompt format, weight: 0.7)
## Prompt Format
We use the ChatML prompt format.
Example:
```
<|im_start|>system
You are NeuralOrca, a helpful AI assistant.
<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
```
## Evaluations
Coming soon
## Context Length
The context length for this model is 8192 tokens (8K).
## License
You are responsible for your use of NeuralOrca.
This software is licensed under the Apache 2.0 license. Commercial use is probably fine, but please contact me first.
<!-- original-model-card end -->
|
linqus/rl_course_vizdoom_health_gathering_supreme
|
linqus
| 2023-12-06T09:41:28Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-06T09:41:21Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.00 +/- 4.92
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r linqus/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
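For example (a sketch; `<your-hf-username>` is a placeholder, and the exact flags are described in the Sample-Factory docs linked above):
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --push_to_hub --hf_repository=<your-hf-username>/rl_course_vizdoom_health_gathering_supreme
```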
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may need to set `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
francesco11/rare-puppers
|
francesco11
| 2023-12-06T09:33:36Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-06T09:33:30Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8913043737411499
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Yorkshire Terrier

#### beagle

#### bull terrier

#### jack russel terrier

#### pitbull

|
aisuko/ft_bert_base_cased
|
aisuko
| 2023-12-06T09:33:00Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-01T22:49:56Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: ft_bert_base_cased
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.566
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft_bert_base_cased
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0124
- Accuracy: 0.566
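For quick experimentation, the checkpoint can be loaded with the `pipeline` API. This is a minimal sketch, not part of the original card; the label names follow the default config (`LABEL_0` … `LABEL_4`, one per star rating) unless it was customized:
```python
from transformers import pipeline

# Loads the fine-tuned 5-class Yelp review classifier from the Hub
classifier = pipeline("text-classification", model="aisuko/ft_bert_base_cased")
print(classifier("The food was great and the service was friendly."))
```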
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 1.1082 | 0.552 |
| No log | 2.0 | 126 | 1.0124 | 0.566 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.14.1
|
hkivancoral/smids_1x_deit_tiny_rms_00001_fold5
|
hkivancoral
| 2023-12-06T09:21:12Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-06T09:04:36Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_tiny_rms_00001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8683333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_tiny_rms_00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8945
- Accuracy: 0.8683
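As a usage sketch (not part of the original card; the class labels come from the training image folders, which are not documented here):
```python
from transformers import pipeline

# Image classification with the fine-tuned DeiT-tiny checkpoint
classifier = pipeline("image-classification", model="hkivancoral/smids_1x_deit_tiny_rms_00001_fold5")
print(classifier("path/to/slide_image.png"))  # path is a placeholder
```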
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4595 | 1.0 | 75 | 0.3430 | 0.875 |
| 0.3075 | 2.0 | 150 | 0.3392 | 0.8667 |
| 0.1885 | 3.0 | 225 | 0.2939 | 0.8833 |
| 0.2401 | 4.0 | 300 | 0.3047 | 0.88 |
| 0.1913 | 5.0 | 375 | 0.2948 | 0.8833 |
| 0.0779 | 6.0 | 450 | 0.3840 | 0.8733 |
| 0.0659 | 7.0 | 525 | 0.3753 | 0.8867 |
| 0.0683 | 8.0 | 600 | 0.4413 | 0.88 |
| 0.0365 | 9.0 | 675 | 0.5012 | 0.875 |
| 0.0284 | 10.0 | 750 | 0.5436 | 0.8783 |
| 0.0339 | 11.0 | 825 | 0.6040 | 0.8883 |
| 0.0022 | 12.0 | 900 | 0.6211 | 0.8767 |
| 0.0421 | 13.0 | 975 | 0.7125 | 0.87 |
| 0.0295 | 14.0 | 1050 | 0.8588 | 0.8567 |
| 0.0522 | 15.0 | 1125 | 0.7259 | 0.875 |
| 0.0002 | 16.0 | 1200 | 0.7288 | 0.87 |
| 0.0239 | 17.0 | 1275 | 0.7745 | 0.865 |
| 0.0349 | 18.0 | 1350 | 0.7880 | 0.8667 |
| 0.0099 | 19.0 | 1425 | 0.7942 | 0.8683 |
| 0.0002 | 20.0 | 1500 | 0.8016 | 0.88 |
| 0.0181 | 21.0 | 1575 | 0.8187 | 0.87 |
| 0.0067 | 22.0 | 1650 | 0.8802 | 0.87 |
| 0.0135 | 23.0 | 1725 | 0.8554 | 0.865 |
| 0.0001 | 24.0 | 1800 | 0.8527 | 0.8733 |
| 0.0254 | 25.0 | 1875 | 0.7951 | 0.87 |
| 0.0001 | 26.0 | 1950 | 0.8033 | 0.87 |
| 0.0127 | 27.0 | 2025 | 0.8239 | 0.8783 |
| 0.0152 | 28.0 | 2100 | 0.9169 | 0.8683 |
| 0.019 | 29.0 | 2175 | 0.8220 | 0.8767 |
| 0.0038 | 30.0 | 2250 | 0.8836 | 0.8683 |
| 0.0001 | 31.0 | 2325 | 0.8686 | 0.875 |
| 0.0 | 32.0 | 2400 | 0.8778 | 0.8717 |
| 0.0121 | 33.0 | 2475 | 1.0279 | 0.8583 |
| 0.0001 | 34.0 | 2550 | 0.9813 | 0.8583 |
| 0.0072 | 35.0 | 2625 | 0.8647 | 0.8767 |
| 0.0 | 36.0 | 2700 | 0.8869 | 0.8683 |
| 0.0027 | 37.0 | 2775 | 0.9010 | 0.865 |
| 0.0 | 38.0 | 2850 | 0.9248 | 0.8633 |
| 0.0 | 39.0 | 2925 | 0.9337 | 0.8617 |
| 0.0 | 40.0 | 3000 | 0.8918 | 0.865 |
| 0.0054 | 41.0 | 3075 | 0.9181 | 0.865 |
| 0.0 | 42.0 | 3150 | 0.8813 | 0.8717 |
| 0.0078 | 43.0 | 3225 | 0.9054 | 0.8633 |
| 0.0 | 44.0 | 3300 | 0.8996 | 0.8667 |
| 0.0016 | 45.0 | 3375 | 0.9082 | 0.8633 |
| 0.0 | 46.0 | 3450 | 0.9083 | 0.865 |
| 0.0068 | 47.0 | 3525 | 0.9001 | 0.8683 |
| 0.0048 | 48.0 | 3600 | 0.8991 | 0.8683 |
| 0.0 | 49.0 | 3675 | 0.8976 | 0.8683 |
| 0.0029 | 50.0 | 3750 | 0.8945 | 0.8683 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Stevenhawkingneko/genshinbert20
|
Stevenhawkingneko
| 2023-12-06T09:18:54Z | 3 | 0 |
transformers
|
[
"transformers",
"zh",
"dataset:mmdjiji/bert-chinese-idioms",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-12-06T09:14:00Z |
---
license: mit
datasets:
- mmdjiji/bert-chinese-idioms
language:
- zh
metrics:
- character
---
|
smitpatl/nona-is-my-muse
|
smitpatl
| 2023-12-06T09:16:56Z | 4 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-09T17:45:54Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Nona-is-my-muse Dreambooth model trained by smitpatl with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
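Alternatively, a minimal `diffusers` sketch (the instance prompt/token is an assumption; Dreambooth models respond to the token they were trained with):
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "smitpatl/nona-is-my-muse", torch_dtype=torch.float16
).to("cuda")
# "nona" is assumed to be the instance token; adjust to the trained concept
image = pipe("a portrait of nona, oil painting").images[0]
image.save("nona.png")
```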
|
sinanoloji/en-to-es-trans-with-bart
|
sinanoloji
| 2023-12-06T09:16:04Z | 16 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-06T08:08:41Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: en-to-es-trans-with-bart
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-es
split: train
args: en-es
metrics:
- name: Bleu
type: bleu
value: 2.1888
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-to-es-trans-with-bart
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0355
- Bleu: 2.1888
- Gen Len: 18.6278
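A minimal usage sketch (not part of the original card; `text2text-generation` matches the card's pipeline tag):
```python
from transformers import pipeline

# English-to-Spanish translation with the fine-tuned BART checkpoint
translator = pipeline("text2text-generation", model="sinanoloji/en-to-es-trans-with-bart")
print(translator("The book is on the table.")[0]["generated_text"])
```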
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.4648 | 1.0 | 4674 | 2.1509 | 1.937 | 18.6399 |
| 2.2417 | 2.0 | 9348 | 2.0355 | 2.1888 | 18.6278 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_1x_deit_tiny_rms_00001_fold4
|
hkivancoral
| 2023-12-06T09:03:17Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-06T08:46:53Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_tiny_rms_00001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8583333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_tiny_rms_00001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2549
- Accuracy: 0.8583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4697 | 1.0 | 75 | 0.4183 | 0.8233 |
| 0.3312 | 2.0 | 150 | 0.3681 | 0.8533 |
| 0.2321 | 3.0 | 225 | 0.4033 | 0.8517 |
| 0.1334 | 4.0 | 300 | 0.3968 | 0.8617 |
| 0.1233 | 5.0 | 375 | 0.4520 | 0.8567 |
| 0.0584 | 6.0 | 450 | 0.5293 | 0.8467 |
| 0.0835 | 7.0 | 525 | 0.5619 | 0.8533 |
| 0.0113 | 8.0 | 600 | 0.7080 | 0.8483 |
| 0.0326 | 9.0 | 675 | 0.7194 | 0.86 |
| 0.0108 | 10.0 | 750 | 0.7779 | 0.8583 |
| 0.0133 | 11.0 | 825 | 0.7881 | 0.8617 |
| 0.0052 | 12.0 | 900 | 0.8341 | 0.87 |
| 0.0272 | 13.0 | 975 | 0.8910 | 0.8617 |
| 0.0077 | 14.0 | 1050 | 0.9561 | 0.8433 |
| 0.0002 | 15.0 | 1125 | 0.9039 | 0.8617 |
| 0.0001 | 16.0 | 1200 | 0.9956 | 0.86 |
| 0.032 | 17.0 | 1275 | 0.9953 | 0.8667 |
| 0.018 | 18.0 | 1350 | 0.9816 | 0.8633 |
| 0.0282 | 19.0 | 1425 | 1.1776 | 0.8467 |
| 0.0002 | 20.0 | 1500 | 1.0796 | 0.8583 |
| 0.0001 | 21.0 | 1575 | 1.1308 | 0.8567 |
| 0.0001 | 22.0 | 1650 | 1.1869 | 0.8467 |
| 0.0001 | 23.0 | 1725 | 1.1953 | 0.86 |
| 0.0134 | 24.0 | 1800 | 1.1511 | 0.85 |
| 0.0197 | 25.0 | 1875 | 1.2279 | 0.8517 |
| 0.0 | 26.0 | 1950 | 1.2715 | 0.8483 |
| 0.0011 | 27.0 | 2025 | 1.2389 | 0.85 |
| 0.0034 | 28.0 | 2100 | 1.2470 | 0.85 |
| 0.0076 | 29.0 | 2175 | 1.1531 | 0.8617 |
| 0.0 | 30.0 | 2250 | 1.2325 | 0.85 |
| 0.0 | 31.0 | 2325 | 1.2009 | 0.8633 |
| 0.0 | 32.0 | 2400 | 1.2311 | 0.85 |
| 0.0 | 33.0 | 2475 | 1.2487 | 0.8583 |
| 0.0 | 34.0 | 2550 | 1.2363 | 0.8567 |
| 0.0 | 35.0 | 2625 | 1.2306 | 0.8567 |
| 0.0 | 36.0 | 2700 | 1.2366 | 0.86 |
| 0.0048 | 37.0 | 2775 | 1.2202 | 0.8567 |
| 0.0 | 38.0 | 2850 | 1.2263 | 0.86 |
| 0.0 | 39.0 | 2925 | 1.2319 | 0.8617 |
| 0.0 | 40.0 | 3000 | 1.2616 | 0.8533 |
| 0.0038 | 41.0 | 3075 | 1.2358 | 0.8583 |
| 0.0 | 42.0 | 3150 | 1.2473 | 0.8583 |
| 0.0 | 43.0 | 3225 | 1.2419 | 0.8567 |
| 0.0 | 44.0 | 3300 | 1.2543 | 0.8583 |
| 0.0 | 45.0 | 3375 | 1.2531 | 0.8567 |
| 0.0 | 46.0 | 3450 | 1.2531 | 0.8583 |
| 0.0 | 47.0 | 3525 | 1.2531 | 0.8583 |
| 0.0 | 48.0 | 3600 | 1.2543 | 0.8583 |
| 0.0 | 49.0 | 3675 | 1.2544 | 0.8583 |
| 0.0 | 50.0 | 3750 | 1.2549 | 0.8583 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
oostapeno/rsgd3_1B_repl_coarsegrained_poly_router_dir_lora_sim
|
oostapeno
| 2023-12-06T08:58:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-12-04T15:32:39Z |
Number of experts present in the library: 35
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| quoref_Find_Answer_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| adversarial_qa_dbidaf_generate_question_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| cos_e_v1_11_question_description_option_text_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| yelp_polarity_reviews_0_2_0 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| ai2_arc_ARC_Challenge_1_0_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| wiqa_what_might_be_the_first_step_of_the_process | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| dream_baseline | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| duorc_ParaphraseRC_answer_question_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| web_questions_whats_the_answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| wiqa_effect_with_string_answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| quoref_Find_Answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| quartz_read_passage_below_choose | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dbpedia_14_given_a_choice_of_categories__v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| cos_e_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| wiqa_effect_with_string_answer_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| squad_v1_1_3_0_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| duorc_SelfRC_answer_question_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| squad_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| wiqa_effect_with_string_answer_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| adversarial_qa_dbidaf_generate_question_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| dream_baseline_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
Last updated on: 2023-12-05 22:51:37+00:00
|
livingbox/minimalistic-test-01
|
livingbox
| 2023-12-06T08:57:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-06T08:53:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### minimalistic-test-01 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
ise-uiuc/Magicoder-S-CL-7B
|
ise-uiuc
| 2023-12-06T08:52:33Z | 170 | 44 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"arxiv:2312.02120",
"arxiv:2305.06161",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-03T19:21:03Z |
---
license: llama2
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
library_name: transformers
pipeline_tag: text-generation
---
# 🎩 Magicoder: Source Code Is All You Need
> Refer to our GitHub repo [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/) for an up-to-date introduction to the Magicoder family!
* 🎩**Magicoder** is a model family empowered by 🪄**OSS-Instruct**, a novel approach to enlightening LLMs with open-source code snippets for generating *low-bias* and *high-quality* instruction data for code.
* 🪄**OSS-Instruct** mitigates the *inherent bias* of the LLM-synthesized instruction data by empowering them with *a wealth of open-source references* to produce more diverse, realistic, and controllable data.


## Model Details
### Model Description
* **Developed by:**
[Yuxiang Wei](https://yuxiang.cs.illinois.edu),
[Zhe Wang](https://github.com/zhewang2001),
[Jiawei Liu](https://jiawei-site.github.io),
[Yifeng Ding](https://yifeng-ding.com),
[Lingming Zhang](https://lingming.cs.illinois.edu)
* **License:** [Llama 2](https://ai.meta.com/llama/license/)
* **Finetuned from model:** [CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)
### Model Sources
* **Repository:** <https://github.com/ise-uiuc/magicoder>
* **Paper:** <https://arxiv.org/abs/2312.02120>
* **Demo (powered by [Gradio](https://www.gradio.app)):**
<https://github.com/ise-uiuc/magicoder/tree/main/demo>
### Training Data
* [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder_oss_instruct_75k): generated through **OSS-Instruct** using `gpt-3.5-turbo-1106` and used to train both Magicoder and Magicoder-S series.
* [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder_evol_instruct_110k): decontaminated and redistributed from [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), used to further finetune Magicoder series and obtain Magicoder-S models.
## Uses
### Direct Use
Magicoders are designed and best suited for **coding tasks**.
### Out-of-Scope Use
Magicoders may not work well in non-coding tasks.
## Bias, Risks, and Limitations
Magicoders may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
Use the code below to get started with the model. Make sure you have installed the [transformers](https://huggingface.co/docs/transformers/index) library.
```python
from transformers import pipeline
import torch
MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
@@ Instruction
{instruction}
@@ Response
"""
instruction = "<Your code instruction here>"  # replace with your actual instruction
prompt = MAGICODER_PROMPT.format(instruction=instruction)
generator = pipeline(
model="ise-uiuc/Magicoder-S-CL-7B",
task="text-generation",
torch_dtype=torch.bfloat16,
device_map="auto",
)
result = generator(prompt, max_length=1024, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
## Technical Details
Refer to our GitHub repo: [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/).
## Citation
```bibtex
@misc{magicoder,
title={Magicoder: Source Code Is All You Need},
author={Yuxiang Wei and Zhe Wang and Jiawei Liu and Yifeng Ding and Lingming Zhang},
year={2023},
eprint={2312.02120},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgements
* [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol-Instruct
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for Magicoder-DS
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for Magicoder-CL
* [StarCoder](https://arxiv.org/abs/2305.06161): Data decontamination
## Important Note
Magicoder models are trained on the synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. Magicoders will not compete with OpenAI's commercial products.
|
ise-uiuc/Magicoder-CL-7B
|
ise-uiuc
| 2023-12-06T08:52:27Z | 95 | 21 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"arxiv:2312.02120",
"arxiv:2305.06161",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-03T18:59:49Z |
---
license: llama2
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
library_name: transformers
pipeline_tag: text-generation
---
# 🎩 Magicoder: Source Code Is All You Need
> Refer to our GitHub repo [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/) for an up-to-date introduction to the Magicoder family!
* 🎩**Magicoder** is a model family empowered by 🪄**OSS-Instruct**, a novel approach to enlightening LLMs with open-source code snippets for generating *low-bias* and *high-quality* instruction data for code.
* 🪄**OSS-Instruct** mitigates the *inherent bias* of the LLM-synthesized instruction data by empowering them with *a wealth of open-source references* to produce more diverse, realistic, and controllable data.


## Model Details
### Model Description
* **Developed by:**
[Yuxiang Wei](https://yuxiang.cs.illinois.edu),
[Zhe Wang](https://github.com/zhewang2001),
[Jiawei Liu](https://jiawei-site.github.io),
[Yifeng Ding](https://yifeng-ding.com),
[Lingming Zhang](https://lingming.cs.illinois.edu)
* **License:** [Llama 2](https://ai.meta.com/llama/license/)
* **Finetuned from model:** [CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)
### Model Sources
* **Repository:** <https://github.com/ise-uiuc/magicoder>
* **Paper:** <https://arxiv.org/abs/2312.02120>
* **Demo (powered by [Gradio](https://www.gradio.app)):**
<https://github.com/ise-uiuc/magicoder/tree/main/demo>
### Training Data
* [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder_oss_instruct_75k): generated through **OSS-Instruct** using `gpt-3.5-turbo-1106` and used to train both Magicoder and Magicoder-S series.
## Uses
### Direct Use
Magicoders are designed and best suited for **coding tasks**.
### Out-of-Scope Use
Magicoders may not work well in non-coding tasks.
## Bias, Risks, and Limitations
Magicoders may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
Use the code below to get started with the model. Make sure you have installed the [transformers](https://huggingface.co/docs/transformers/index) library.
```python
from transformers import pipeline
import torch
MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
@@ Instruction
{instruction}
@@ Response
"""
instruction = "<Your code instruction here>"  # replace with your actual instruction
prompt = MAGICODER_PROMPT.format(instruction=instruction)
generator = pipeline(
model="ise-uiuc/Magicoder-CL-7B",
task="text-generation",
torch_dtype=torch.bfloat16,
device_map="auto",
)
result = generator(prompt, max_length=1024, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
## Technical Details
Refer to our GitHub repo: [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/).
## Citation
```bibtex
@misc{magicoder,
title={Magicoder: Source Code Is All You Need},
author={Yuxiang Wei and Zhe Wang and Jiawei Liu and Yifeng Ding and Lingming Zhang},
year={2023},
eprint={2312.02120},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgements
* [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol-Instruct
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for Magicoder-DS
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for Magicoder-CL
* [StarCoder](https://arxiv.org/abs/2305.06161): Data decontamination
## Important Note
Magicoder models are trained on the synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. Magicoders will not compete with OpenAI's commercial products.
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow19
|
FounderOfHuggingface
| 2023-12-06T08:48:30Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T08:48:27Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
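A minimal sketch for loading this LoRA adapter on its GPT-2 base (inferred from the `base_model: gpt2` metadata above; not an official example):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow19"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("The film tells the story of", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```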
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow16
|
FounderOfHuggingface
| 2023-12-06T08:48:12Z | 4 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T08:48:06Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
sinonimayzer/UzRoBERTa-v1
|
sinonimayzer
| 2023-12-06T08:33:27Z | 101 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"uz",
"dataset:sinonimayzer/mixed-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-11-29T08:29:02Z |
---
widget:
- text: Kuchli yomg‘irlar tufayli bir qator <mask> kuchli sel oqishi kuzatildi.
example_title: Example 1
- text: >-
Shu munosabat bilan O‘zbekiston Prezidenti global inqiroz sharoitida savdo-iqtisodiy hamkorlikni <mask> va hududlararo aloqalarni rivojlantirishning muhim masalalariga to‘xtalib o‘tdi.
example_title: Example 2
datasets:
- sinonimayzer/mixed-data
language:
- uz
pipeline_tag: fill-mask
tags:
- generated_from_trainer
model-index:
- name: roberta-v1
results: []
---
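A minimal fill-mask sketch (it is an assumption that the standard `pipeline` API applies to this checkpoint; the example sentence is taken from the widget metadata above):
```python
from transformers import pipeline

# Uzbek masked language model; RoBERTa-style <mask> token
fill = pipeline("fill-mask", model="sinonimayzer/UzRoBERTa-v1")
print(fill("Kuchli yomg‘irlar tufayli bir qator <mask> kuchli sel oqishi kuzatildi."))
```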
|
hkivancoral/smids_1x_deit_tiny_rms_00001_fold2
|
hkivancoral
| 2023-12-06T08:27:17Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-06T08:10:09Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_tiny_rms_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8851913477537438
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_tiny_rms_00001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8725
- Accuracy: 0.8852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4335 | 1.0 | 75 | 0.3629 | 0.8519 |
| 0.2445 | 2.0 | 150 | 0.3163 | 0.8752 |
| 0.1892 | 3.0 | 225 | 0.3276 | 0.8719 |
| 0.1722 | 4.0 | 300 | 0.3320 | 0.8769 |
| 0.1627 | 5.0 | 375 | 0.4052 | 0.8336 |
| 0.0774 | 6.0 | 450 | 0.4177 | 0.8636 |
| 0.0654 | 7.0 | 525 | 0.4298 | 0.8835 |
| 0.0585 | 8.0 | 600 | 0.5091 | 0.8586 |
| 0.0287 | 9.0 | 675 | 0.5515 | 0.8785 |
| 0.0409 | 10.0 | 750 | 0.6043 | 0.8735 |
| 0.0028 | 11.0 | 825 | 0.6415 | 0.8852 |
| 0.015 | 12.0 | 900 | 0.7011 | 0.8819 |
| 0.0186 | 13.0 | 975 | 0.7682 | 0.8669 |
| 0.0337 | 14.0 | 1050 | 0.7658 | 0.8669 |
| 0.0071 | 15.0 | 1125 | 0.7841 | 0.8636 |
| 0.0427 | 16.0 | 1200 | 0.8228 | 0.8702 |
| 0.0064 | 17.0 | 1275 | 0.8036 | 0.8752 |
| 0.0151 | 18.0 | 1350 | 0.8626 | 0.8719 |
| 0.0002 | 19.0 | 1425 | 0.8177 | 0.8819 |
| 0.0112 | 20.0 | 1500 | 0.8351 | 0.8636 |
| 0.0001 | 21.0 | 1575 | 0.8049 | 0.8869 |
| 0.0032 | 22.0 | 1650 | 0.8268 | 0.8735 |
| 0.0001 | 23.0 | 1725 | 0.8681 | 0.8752 |
| 0.0001 | 24.0 | 1800 | 0.8420 | 0.8686 |
| 0.0 | 25.0 | 1875 | 0.8252 | 0.8686 |
| 0.0001 | 26.0 | 1950 | 0.8504 | 0.8719 |
| 0.0079 | 27.0 | 2025 | 0.8787 | 0.8769 |
| 0.0047 | 28.0 | 2100 | 0.8232 | 0.8769 |
| 0.0019 | 29.0 | 2175 | 0.8813 | 0.8752 |
| 0.0048 | 30.0 | 2250 | 0.8417 | 0.8652 |
| 0.0115 | 31.0 | 2325 | 0.8526 | 0.8702 |
| 0.0 | 32.0 | 2400 | 0.8276 | 0.8835 |
| 0.0001 | 33.0 | 2475 | 0.8214 | 0.8835 |
| 0.0001 | 34.0 | 2550 | 0.8377 | 0.8802 |
| 0.0182 | 35.0 | 2625 | 0.8794 | 0.8735 |
| 0.0 | 36.0 | 2700 | 0.8604 | 0.8802 |
| 0.0001 | 37.0 | 2775 | 0.8420 | 0.8802 |
| 0.0 | 38.0 | 2850 | 0.9013 | 0.8802 |
| 0.0008 | 39.0 | 2925 | 0.8537 | 0.8819 |
| 0.0 | 40.0 | 3000 | 0.8672 | 0.8819 |
| 0.0029 | 41.0 | 3075 | 0.8548 | 0.8852 |
| 0.0 | 42.0 | 3150 | 0.8641 | 0.8835 |
| 0.002 | 43.0 | 3225 | 0.8645 | 0.8835 |
| 0.0052 | 44.0 | 3300 | 0.8649 | 0.8835 |
| 0.0 | 45.0 | 3375 | 0.8672 | 0.8835 |
| 0.0 | 46.0 | 3450 | 0.8698 | 0.8852 |
| 0.0018 | 47.0 | 3525 | 0.8729 | 0.8852 |
| 0.0 | 48.0 | 3600 | 0.8731 | 0.8852 |
| 0.0032 | 49.0 | 3675 | 0.8719 | 0.8852 |
| 0.0011 | 50.0 | 3750 | 0.8725 | 0.8852 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
JamesCS462/JamesCS462
|
JamesCS462
| 2023-12-06T08:25:25Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-06T07:46:11Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: JamesCS462/JamesCS462
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# JamesCS462/JamesCS462
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.0803
- Validation Loss: 2.9015
- Train Accuracy: 1.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 4.1615 | 3.9126 | 1.0 | 0 |
| 3.8996 | 3.6375 | 1.0 | 1 |
| 3.6001 | 3.3828 | 1.0 | 2 |
| 3.3459 | 3.1305 | 1.0 | 3 |
| 3.0803 | 2.9015 | 1.0 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
nguyenvulebinh/voice-filter
|
nguyenvulebinh
| 2023-12-06T08:14:38Z | 1,235 | 4 |
transformers
|
[
"transformers",
"pytorch",
"voicefilter",
"multilingual",
"arxiv:2308.11380",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-11-07T12:33:44Z |
---
language: multilingual
tags:
- speech
license: apache-2.0
---
# ConVoiFilter
The ConVoiFilter model is designed to isolate a target speaker's voice from a recording. For a comprehensive understanding of its functionality and methodology, please refer to the associated paper available at: https://arxiv.org/pdf/2308.11380.pdf
To load the model and run inference, you can use the provided Colab script. Simply click the badge below to open it in Google Colab:
[](https://colab.research.google.com/drive/1ekMhvwAEPI0stPRkoodLUlqX9LFuoObf?usp=sharing)
## Contact
[email protected]
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)
|
hkivancoral/smids_1x_deit_tiny_rms_00001_fold1
|
hkivancoral
| 2023-12-06T08:08:50Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-06T07:51:18Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_tiny_rms_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8631051752921536
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_tiny_rms_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8877
- Accuracy: 0.8631
## Model description
More information needed
## Intended uses & limitations
More information needed
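In the absence of a filled-in card, the following hedged sketch shows one way to run the classifier. It assumes the Trainer pushed a preprocessor config with the weights; if not, load the processor from the base checkpoint facebook/deit-tiny-patch16-224 instead. The file name `slide_patch.png` is a hypothetical input.
```python
# Minimal sketch, assuming a preprocessor config was pushed with the weights.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "hkivancoral/smids_1x_deit_tiny_rms_00001_fold1"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("slide_patch.png"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```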
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4093 | 1.0 | 76 | 0.4066 | 0.8264 |
| 0.3822 | 2.0 | 152 | 0.3553 | 0.8614 |
| 0.1979 | 3.0 | 228 | 0.3399 | 0.8631 |
| 0.1648 | 4.0 | 304 | 0.3252 | 0.8815 |
| 0.0965 | 5.0 | 380 | 0.3551 | 0.8531 |
| 0.072 | 6.0 | 456 | 0.4036 | 0.8631 |
| 0.0292 | 7.0 | 532 | 0.4208 | 0.8598 |
| 0.0237 | 8.0 | 608 | 0.5314 | 0.8497 |
| 0.0407 | 9.0 | 684 | 0.5484 | 0.8497 |
| 0.0074 | 10.0 | 760 | 0.5780 | 0.8715 |
| 0.0366 | 11.0 | 836 | 0.5799 | 0.8631 |
| 0.0022 | 12.0 | 912 | 0.8054 | 0.8414 |
| 0.0514 | 13.0 | 988 | 0.5849 | 0.8748 |
| 0.0003 | 14.0 | 1064 | 0.6713 | 0.8664 |
| 0.0448 | 15.0 | 1140 | 0.6921 | 0.8715 |
| 0.0014 | 16.0 | 1216 | 0.6848 | 0.8631 |
| 0.0001 | 17.0 | 1292 | 0.7084 | 0.8648 |
| 0.0152 | 18.0 | 1368 | 0.8109 | 0.8681 |
| 0.0001 | 19.0 | 1444 | 0.7361 | 0.8698 |
| 0.004 | 20.0 | 1520 | 0.7743 | 0.8664 |
| 0.0035 | 21.0 | 1596 | 0.7272 | 0.8748 |
| 0.0282 | 22.0 | 1672 | 0.7515 | 0.8731 |
| 0.0001 | 23.0 | 1748 | 0.8060 | 0.8581 |
| 0.0001 | 24.0 | 1824 | 0.7763 | 0.8581 |
| 0.0156 | 25.0 | 1900 | 0.7302 | 0.8831 |
| 0.0068 | 26.0 | 1976 | 0.8763 | 0.8514 |
| 0.0045 | 27.0 | 2052 | 0.8144 | 0.8664 |
| 0.0058 | 28.0 | 2128 | 0.7716 | 0.8614 |
| 0.009 | 29.0 | 2204 | 0.8016 | 0.8664 |
| 0.0 | 30.0 | 2280 | 0.8234 | 0.8631 |
| 0.0087 | 31.0 | 2356 | 0.8420 | 0.8631 |
| 0.0102 | 32.0 | 2432 | 0.8218 | 0.8698 |
| 0.0 | 33.0 | 2508 | 0.8439 | 0.8564 |
| 0.0 | 34.0 | 2584 | 0.8448 | 0.8598 |
| 0.0154 | 35.0 | 2660 | 0.8638 | 0.8631 |
| 0.0044 | 36.0 | 2736 | 0.8664 | 0.8715 |
| 0.0088 | 37.0 | 2812 | 0.8649 | 0.8598 |
| 0.0 | 38.0 | 2888 | 0.8771 | 0.8598 |
| 0.0028 | 39.0 | 2964 | 0.8789 | 0.8631 |
| 0.0 | 40.0 | 3040 | 0.8645 | 0.8648 |
| 0.0044 | 41.0 | 3116 | 0.8681 | 0.8664 |
| 0.0 | 42.0 | 3192 | 0.8746 | 0.8631 |
| 0.0056 | 43.0 | 3268 | 0.8786 | 0.8664 |
| 0.0 | 44.0 | 3344 | 0.8858 | 0.8648 |
| 0.0 | 45.0 | 3420 | 0.8848 | 0.8648 |
| 0.0 | 46.0 | 3496 | 0.8858 | 0.8648 |
| 0.0 | 47.0 | 3572 | 0.8868 | 0.8631 |
| 0.0023 | 48.0 | 3648 | 0.8879 | 0.8631 |
| 0.0 | 49.0 | 3724 | 0.8884 | 0.8631 |
| 0.0 | 50.0 | 3800 | 0.8877 | 0.8631 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
donggyunkim/cnn_news_summary_model_trained_on_reduced_data
|
donggyunkim
| 2023-12-06T07:52:29Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-06T07:48:10Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: cnn_news_summary_model_trained_on_reduced_data
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: train[:2%]
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 0.2162
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cnn_news_summary_model_trained_on_reduced_data
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6625
- Rouge1: 0.2162
- Rouge2: 0.0943
- Rougel: 0.183
- Rougelsum: 0.183
- Generated Length: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
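Until usage guidance is added, a hedged sketch of standard summarization inference is below. It assumes the checkpoint inherits t5-small's `summarize: ` task prefix in its config, so the pipeline applies the prefix automatically.
```python
# Minimal sketch, assuming the inherited t5-small config still carries the
# "summarize: " task prefix (applied automatically by the pipeline).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="donggyunkim/cnn_news_summary_model_trained_on_reduced_data",
)
article = "(CNN) -- Replace this with a real news article body ..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```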
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:|
| No log | 1.0 | 288 | 1.6773 | 0.2168 | 0.0946 | 0.1835 | 0.1836 | 19.0 |
| 1.9303 | 2.0 | 576 | 1.6625 | 0.2162 | 0.0943 | 0.183 | 0.183 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow15
|
FounderOfHuggingface
| 2023-12-06T07:48:44Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T07:43:51Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
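Since no code is provided yet, here is a hedged sketch of the usual PEFT loading pattern; it assumes the adapter was trained for causal language modeling on the `gpt2` base named in the metadata, and the prompt text is a placeholder.
```python
# Minimal sketch, assuming a causal-LM LoRA adapter on the gpt2 base.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base,
    "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow15",
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

ids = tokenizer("The company is best known for", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(ids, max_new_tokens=20)[0], skip_special_tokens=True))
```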
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow14
|
FounderOfHuggingface
| 2023-12-06T07:47:58Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T07:42:49Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
jasonlau/en-grammar-correction
|
jasonlau
| 2023-12-06T07:37:56Z | 9 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"doi:10.57967/hf/1295",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-22T15:54:49Z |
---
license: apache-2.0
pipeline_tag: text2text-generation
inference:
parameters:
max_length: 128
widget:
- text: "grammar: It was one of the most disarster examples."
example_title: "Adjective Error"
- text: "grammar: Although how I enjoyed it, there were still something needed to be improved."
example_title: "Adverb Error"
- text: "grammar: Although I'm not the best, but I have a lot of confident in it."
example_title: "Conjunction Error"
- text: "grammar: Finally, I will have high salary when I am Journalist."
example_title: "Determiner Error"
- text: "grammar: Gorky Park is my favorite place for my leisure."
example_title: "Noun Error"
- text: "grammar: At sum up, in my opinion travelling by car is better."
example_title: "Particle Error"
- text: "grammar: I suggest changing the old buses by new buses and put security cameras in the buses."
example_title: "Preposition Error"
- text: "grammar: Maybe in the future will be worse."
example_title: "Pronoun Error"
- text: "grammar: You can listen live music or a DJ on Friday evening, it 's really fun!"
example_title: "Punctuation Error"
- text: "grammar: First of all, I am going to talk about adavantages and disadvantages for this topic."
example_title: "Spelling Error"
- text: "grammar: I start to hunt when I was seven years old."
example_title: "Verb Error"
- text: "grammar: I was impressed because I' ve never seen before such films."
example_title: "Word Order Error"
---
### An NMT-Based Grammatical Error Correction System for Chinese EFL Learners' Writing
#### Data
[Lang-8 Corpus of Learner English](https://github.com/google-research-datasets/clang8) and [The National University of Singapore Corpus of Learner English (NUCLE)](https://www.cl.cam.ac.uk/research/nl/bea2019st/).
#### Usage
```python
import torch
from transformers import (
    T5Tokenizer,
    T5ForConditionalGeneration
)


class GEC:
    def __init__(self, model, tokenizer):
        # Run on GPU when available, otherwise fall back to CPU.
        torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
        self.device = torch_device
        self.model = model.to(torch_device)
        self.tokenizer = tokenizer

    def correct(self, input):
        # The model was trained with a "grammar: " task prefix.
        prefix = "grammar: "
        input = prefix + input
        input_ids = self.tokenizer(input, return_tensors="pt").input_ids.to(self.device)
        # Beam search tends to give more fluent corrections than greedy decoding.
        outputs = self.model.generate(input_ids, max_length=128, num_beams=5, early_stopping=True)
        cor = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
        return cor


model_name = "jasonlau/en-grammar-correction"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

gec = GEC(model, tokenizer)
orig = 'Lets try Transformers!'
cor = gec.correct(orig)
print(cor)
```
#### Author
Lei Liu \
Department of English \
Yanshan University
#### Funding
The work is supported by the MOE Foundation of Humanities and Social Sciences (Grant No.
17YJC740055).
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow11
|
FounderOfHuggingface
| 2023-12-06T07:33:46Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T07:32:20Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow9
|
FounderOfHuggingface
| 2023-12-06T07:30:43Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T07:30:11Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow7
|
FounderOfHuggingface
| 2023-12-06T07:29:12Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T07:28:19Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
hkivancoral/smids_1x_deit_tiny_rms_0001_fold5
|
hkivancoral
| 2023-12-06T07:27:58Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-06T07:10:43Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_tiny_rms_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8516666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_tiny_rms_0001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2046
- Accuracy: 0.8517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7851 | 1.0 | 75 | 0.8659 | 0.64 |
| 0.6518 | 2.0 | 150 | 0.7541 | 0.6467 |
| 0.4507 | 3.0 | 225 | 0.6126 | 0.755 |
| 0.4597 | 4.0 | 300 | 0.4698 | 0.805 |
| 0.3528 | 5.0 | 375 | 0.4309 | 0.835 |
| 0.2717 | 6.0 | 450 | 0.4110 | 0.8517 |
| 0.2211 | 7.0 | 525 | 0.5132 | 0.8283 |
| 0.1873 | 8.0 | 600 | 0.5255 | 0.835 |
| 0.1509 | 9.0 | 675 | 0.5409 | 0.85 |
| 0.06 | 10.0 | 750 | 0.7466 | 0.8333 |
| 0.1297 | 11.0 | 825 | 0.8027 | 0.835 |
| 0.0789 | 12.0 | 900 | 0.7518 | 0.8417 |
| 0.1522 | 13.0 | 975 | 0.7901 | 0.8533 |
| 0.0628 | 14.0 | 1050 | 0.8326 | 0.845 |
| 0.0732 | 15.0 | 1125 | 0.9433 | 0.8317 |
| 0.0276 | 16.0 | 1200 | 0.9028 | 0.845 |
| 0.0402 | 17.0 | 1275 | 0.8882 | 0.8617 |
| 0.0561 | 18.0 | 1350 | 0.9516 | 0.8367 |
| 0.0072 | 19.0 | 1425 | 1.0341 | 0.8467 |
| 0.0251 | 20.0 | 1500 | 1.0436 | 0.8433 |
| 0.0171 | 21.0 | 1575 | 0.8887 | 0.855 |
| 0.0141 | 22.0 | 1650 | 0.9265 | 0.8517 |
| 0.0297 | 23.0 | 1725 | 1.1359 | 0.8383 |
| 0.0008 | 24.0 | 1800 | 1.0337 | 0.8567 |
| 0.0322 | 25.0 | 1875 | 0.8913 | 0.87 |
| 0.0416 | 26.0 | 1950 | 0.9175 | 0.84 |
| 0.0268 | 27.0 | 2025 | 0.9551 | 0.86 |
| 0.0237 | 28.0 | 2100 | 1.0150 | 0.8533 |
| 0.0252 | 29.0 | 2175 | 0.8872 | 0.8617 |
| 0.0035 | 30.0 | 2250 | 0.9489 | 0.8633 |
| 0.0155 | 31.0 | 2325 | 1.0473 | 0.8417 |
| 0.0007 | 32.0 | 2400 | 0.9648 | 0.8533 |
| 0.0102 | 33.0 | 2475 | 1.0603 | 0.8517 |
| 0.0 | 34.0 | 2550 | 1.0445 | 0.8533 |
| 0.0057 | 35.0 | 2625 | 1.0369 | 0.8467 |
| 0.0 | 36.0 | 2700 | 1.0577 | 0.8517 |
| 0.004 | 37.0 | 2775 | 1.0782 | 0.845 |
| 0.0033 | 38.0 | 2850 | 1.1658 | 0.8433 |
| 0.0001 | 39.0 | 2925 | 1.0942 | 0.8533 |
| 0.0 | 40.0 | 3000 | 1.1718 | 0.8467 |
| 0.0038 | 41.0 | 3075 | 1.1726 | 0.855 |
| 0.0 | 42.0 | 3150 | 1.1472 | 0.85 |
| 0.008 | 43.0 | 3225 | 1.1850 | 0.8517 |
| 0.0008 | 44.0 | 3300 | 1.1576 | 0.845 |
| 0.0022 | 45.0 | 3375 | 1.1935 | 0.855 |
| 0.0 | 46.0 | 3450 | 1.1973 | 0.8533 |
| 0.0056 | 47.0 | 3525 | 1.2032 | 0.8533 |
| 0.0051 | 48.0 | 3600 | 1.2041 | 0.8533 |
| 0.0 | 49.0 | 3675 | 1.2053 | 0.8517 |
| 0.0043 | 50.0 | 3750 | 1.2046 | 0.8517 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow5
|
FounderOfHuggingface
| 2023-12-06T07:27:19Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T07:27:11Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow4
|
FounderOfHuggingface
| 2023-12-06T07:26:48Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T07:26:11Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_non_member_shadow1
|
FounderOfHuggingface
| 2023-12-06T07:18:53Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-06T07:11:41Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
lrzjason/playground-v2-1024px-aesthetic-fp16
|
lrzjason
| 2023-12-06T07:03:48Z | 0 | 5 | null |
[
"license:other",
"region:us"
] | null | 2023-12-06T06:55:14Z |
---
license: other
license_name: playground-v2-community-license
license_link: >-
https://huggingface.co/playgroundai/playground-v2-1024px-aesthetic/blob/main/LICENSE.md
---
This model is the fp16 version of playgroundai/playground-v2-1024px-aesthetic.
License:
https://huggingface.co/playgroundai/playground-v2-1024px-aesthetic/blob/main/LICENSE.md
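A hedged loading sketch is below; it assumes this repo keeps the diffusers pipeline layout of the upstream playground-v2 release, with fp16 weights stored directly, and the prompt is a placeholder.
```python
# Minimal sketch, assuming the upstream diffusers pipeline layout.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "lrzjason/playground-v2-1024px-aesthetic-fp16",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a red sports car on a coastal road", guidance_scale=3.0).images[0]
image.save("out.png")
```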
|
matansol/poca-SoccerTwos
|
matansol
| 2023-12-06T06:48:46Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-12-06T06:48:02Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: matansol/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Jxter/vit-base-patch16-224-in21k-finetuned-lora-food101
|
Jxter
| 2023-12-06T06:47:17Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-30T01:16:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
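The repo name suggests this is a LoRA adapter for `google/vit-base-patch16-224-in21k` fine-tuned on Food-101, though the card itself does not state the base checkpoint. Under that assumption, a minimal loading sketch looks like this; `num_labels=101` reflects Food-101's class count.

```python
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed base checkpoint, inferred from the repo name only.
base = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=101,                 # Food-101 has 101 classes
    ignore_mismatched_sizes=True,   # replaces the in21k classifier head
)

# Attach the LoRA weights from this repository.
model = PeftModel.from_pretrained(
    base, "Jxter/vit-base-patch16-224-in21k-finetuned-lora-food101"
)
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
```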
|