| Column | Type | Range / Cardinality |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-15 00:43:56 |
| downloads | int64 | 0–223M |
| likes | int64 | 0–11.7k |
| library_name | string | 521 classes |
| tags | list | lengths 1–4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-15 00:40:56 |
| card | string | lengths 11–1.01M |
artificialguybr/clayanimationredmond-1-5-version-clay-animation-lora-for-sd-1-5 | artificialguybr | 2023-11-19T18:33:01Z | 6 | 5 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"clay animation",
"clay",
"style",
"claymore",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:other",
"region:us"
]
| text-to-image | 2023-11-19T18:33:00Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- clay animation
- clay
- style
- claymore
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: Clay Animation
widget:
- text: 'A cute blonde girl,
,Clay Animation, Clay,'
output:
url: >-
3755655.jpeg
- text: 'A cute blonde girl,
,Clay Animation, Clay,'
output:
url: >-
3755653.jpeg
- text: 'A cat wearing sunglasses, portrait,
Clay Animation, Clay,'
output:
url: >-
3755646.jpeg
- text: 'A elephant,
Clay Animation, Clay,'
output:
url: >-
3755648.jpeg
- text: 'A cute chicken
,Clay Animation, Clay,'
output:
url: >-
3755660.jpeg
- text: 'Boy wearing shorts in front of a beach,
,Clay Animation, Clay,'
output:
url: >-
3755665.jpeg
- text: 'A clown, creepy, horror, terror, dark scene, dark, miniature,
,(((Clay Animation, Clay))), '
output:
url: >-
3755666.jpeg
- text: 'A clown, creepy, horror, terror, dark scene, dark, miniature,
,(((Clay Animation, Clay))), '
output:
url: >-
3755669.jpeg
- text: 'Donald Trump,
Clay Animation, Clay,'
output:
url: >-
3755670.jpeg
- text: 'A cat wearing sunglasses, portrait,
Clay Animation, Clay,'
output:
url: >-
3755672.jpeg
---
# ClayAnimationRedmond: 1.5 Version - Clay Animation Lora for SD 1.5
<Gallery />
<h1 id="heading-28">ClayAnimation.Redmond 1.5 Version is here!</h1><p>Introducing ClayAnimation.Redmond 1.5 Version, the ultimate LORA for creating Clay Animation images for SD 1.5!</p><p>I'm grateful for the GPU time from <strong>Redmond.AI</strong> that allowed me to make this LORA! If you need GPU, then you need the great services from <a target="_blank" rel="ugc" href="http://Redmond.AI">Redmond.AI</a>.</p><p>Test all my Loras <a target="_blank" rel="ugc" href="https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora">here</a> for free and unlimited. Thanks, HF, for Inference API!</p><p><span style="color:rgb(210, 208, 206)">It is based on </span><strong><span style="color:rgb(210, 208, 206)">SD 1.5 using LIBERTE REDMOND Model as base</span></strong><span style="color:rgb(210, 208, 206)"> and fine-tuned on a large dataset</span><strong><span style="color:rgb(210, 208, 206)">.</span></strong></p><p>The LORA has a high capacity to generate Coloring Book Images!</p><h3 id="heading-38"><strong><u>The tag for the model:Clay Animation, Clay</u></strong></h3><p>I really hope you like the LORA and use it.</p><p>If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.</p><p>Patreon:</p><p><a target="_blank" rel="ugc" href="https://www.patreon.com/user?u=81570187">https://www.patreon.com/user?u=81570187</a></p><p>Ko-fi:<a target="_blank" rel="ugc" href="https://ko-fi.com/artificialguybr">https://ko-fi.com/artificialguybr</a></p><p>BuyMeACoffe:<a target="_blank" rel="ugc" href="https://www.buymeacoffee.com/jvkape">https://www.buymeacoffee.com/jvkape</a></p><p>Follow me in my twitter to know before all about new models:</p><p><a target="_blank" rel="ugc" href="https://twitter.com/artificialguybr/"><u>https://twitter.com/artificialguybr/</u></a></p>
## Image examples for the model:

> A cute blonde girl,
,Clay Animation, Clay,

> A cat wearing sunglasses, portrait,
Clay Animation, Clay,

> A elephant,
Clay Animation, Clay,

> A cute chicken
,Clay Animation, Clay,

> Boy wearing shorts in front of a beach,
,Clay Animation, Clay,

> A clown, creepy, horror, terror, dark scene, dark, miniature,
,(((Clay Animation, Clay))),

> A clown, creepy, horror, terror, dark scene, dark, miniature,
,(((Clay Animation, Clay))),

> Donald Trump,
Clay Animation, Clay,

> A cat wearing sunglasses, portrait,
Clay Animation, Clay,
|
AlexDLP/taxi3 | AlexDLP | 2023-11-19T18:32:11Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-19T18:32:08Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # load_from_hub is the Q-learning helper defined in the Hugging Face Deep RL course

model = load_from_hub(repo_id="AlexDLP/taxi3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
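Acting greedily with the loaded Q-table could then look like this (a sketch continuing the snippet above, assuming the pickled dict exposes a `qtable` entry as in the Hugging Face Deep RL course and the classic `gym` step API):

```python
import numpy as np

state = env.reset()  # classic gym API; gymnasium returns (state, info) instead
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for this state
    state, reward, done, info = env.step(action)
```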
|
LoneStriker/Yi-34B-Spicyboros-3.1-3-6.0bpw-h6-exl2 | LoneStriker | 2023-11-19T18:30:32Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:unalignment/spicy-3.1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-19T18:03:22Z | ---
license: other
license_name: yi-license
license_link: LICENSE
datasets:
- unalignment/spicy-3.1
---
# Fine-tune of Yi-34B with Spicyboros-3.1-3
Three epochs of fine-tuning with @jondurbin's SpicyBoros-3.1 dataset. 5.0bpw and 5.15bpw should fit on a single 3090/4090 (you may need to enable the 8-bit cache); 6.0bpw and 8.0bpw will require more than one 24 GB VRAM GPU.
**Please note:** you may have to turn down repetition penalty to ~1.0. The model seems to get into "thesaurus" mode sometimes without this change.
# Original Yi-34B Model Card Below
<div align="center">
<h1>
Yi
</h1>
</div>
## Introduction
The **Yi** series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). The first public release contains two base models with parameter sizes of 6B and 34B.
## News
- 🎯 **2023/11/02**: The base model of `Yi-6B` and `Yi-34B`
## Model Performance
| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Commonsense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :-------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | 39.8 |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 26.0 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| **Yi-34B** | **76.3** | **83.7** | **81.4** | **82.8** | **54.3** | **80.1** | **76.4** | **37.1** |
While benchmarking open-source models, we have observed a disparity between the results generated by our pipeline and those reported in public sources (e.g. OpenCompass). Upon conducting a more in-depth investigation of this difference, we have discovered that various models may employ different prompts, post-processing strategies, and sampling techniques, potentially resulting in significant variations in the outcomes. Our prompt and post-processing strategy remains consistent with the original benchmark, and greedy decoding is employed during evaluation without any post-processing of the generated content. For scores that were not reported by the original authors (including scores reported with different settings), we try to obtain results with our pipeline.
To evaluate the model's capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SQuAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted in a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due to technical constraints, we did not test Falcon-180B on QuAC and OBQA; its score is derived by averaging the scores on the remaining tasks. Since the scores for these two tasks are generally lower than the average, we believe that Falcon-180B's performance was not underestimated.
## Disclaimer
Although we use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability, due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns.
## License
The Yi series models must adhere to the [Model License Agreement](https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE).
For any questions related to licensing and copyright, please contact us ([[email protected]](mailto:[email protected])).
|
SaikouDT1/API_connect | SaikouDT1 | 2023-11-19T18:30:31Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-11-19T18:26:57Z | # DT1 2023: Enabling Technologies, Assignment 1
The assignment description and requirements are on Moodle.
## Introduction
The small Flask application presented here is a proxy that we use to interact with the Hugging Face Inference API.
Your job is to deploy and interact with the Hugging Face API using this proxy. The interface will be built
using the no-code tool Bubble.
This assignment covers the first four learning cycles:
- Software Architecture
- Bubble
- API Design
- Cloud Computing
Different aspects of the assignment cover all four cycles. Please go back to the lecture materials
if there is something you don't understand. In addition, we provide materials here that we believe
will act as further hints for successfully completing the assignment.
**NOTE: The code has been developed and tested on Ubuntu (Debian). This is the OS you will be using on the Google
Cloud Platform. For local testing on other operating systems, you may need to find the equivalent commands yourself.**
## Codebase
- main.py: contains all the code for the proxy including the API routes.
- Pipfile, Pipfile.lock: dependency file for running the codebase
- Dockerfile: docker build configuration
## Supplementary Materials
### Software Architecture
- You can use TLDraw for diagramming: https://www.tldraw.com/ [you can use any other tool if you prefer, like Lucidchart, Miro, etc.]
### Bubble
### API Design
- Curl for testing your docker container locally: https://daniel.haxx.se/blog/2021/05/31/curl-localhost-as-a-local-host/
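If you prefer Python to curl, a minimal local smoke test could look like the sketch below. The route `/query`, port 5000, and the JSON payload are hypothetical placeholders — check `main.py` for the actual routes the proxy exposes:

```python
import requests

# Hypothetical route and payload: substitute the real route from main.py.
response = requests.post(
    "http://localhost:5000/query",
    json={"inputs": "Hello, world!"},
    timeout=30,
)
print(response.status_code)
print(response.json())
```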
### Cloud Computing
- Installing Docker on Debian: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-debian-10
- You can use ```sudo passwd``` to set the password once you log in to your GCP VM.
- Dockerize your application: https://docs.docker.com/get-started/02_our_app/
- Run Dockerfile: https://docs.docker.com/language/java/run-containers/
- Docker image to Docker Hub: https://docs.docker.com/get-started/04_sharing_app/
- Firewall Rules on GCP: https://www.howtogeek.com/devops/how-to-open-firewall-ports-on-a-gcp-compute-engine-instance/
- In case you would like to work using your local terminal: https://cloud.google.com/sdk/docs/install-sdk#linux |
LoneStriker/Karen_TheEditor_V2_STRICT_Mistral_7B-4.0bpw-h6-exl2 | LoneStriker | 2023-11-19T18:28:19Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llm",
"llama",
"spellcheck",
"grammar",
"conversational",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-19T18:15:06Z | ---
tags:
- llm
- llama
- spellcheck
- grammar
license: llama2
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B/resolve/main/karen2.jpg" alt="FPHam's Karen v2" style="width: 80%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Karen Ko-fi</a></p>
</div>
<!-- header end -->
# Karen is an editor for your text. (v.2) STRICT edition
Ah, Karen, a true peach among grammatical cucumbers! She yearns to rectify the missteps and linguistic tangles that infest your horribly written fiction.
Yet, unlike those ChatGPT kaboodles that morph into self-absorbed, constipated gurus of self-help style, Karen remains steadfastly grounded in grammatical wisdom but respectful of your style.
# Info
Karen, Version 2, uses a completely different data set and base model than the previous Karen.
# There are two versions of Karen V2
1. Strict (this one), in which Karen will try not to make too many changes to your original text, mostly fixing grammar and spelling, assuming that you know what you are doing.
2. Creative (to be uploaded), in which Karen may suggest slight contextual improvements or rephrasing where necessary. It's Karen, after a glass of wine.
# Goals
Karen's primary goal is to rectify grammatical and spelling errors in US English without altering the style of the text. She is adept at identifying and correcting common ESL errors.
**Verb Tense Errors:**
- Incorrect use of verb tenses, such as using present tense when past tense is required and vice versa.
- Confusion between continuous and simple tenses.

**Subject-Verb Agreement:**
- Lack of agreement between the subject and verb in number, e.g., using a singular verb with a plural subject or vice versa.

**Articles (a, an, the):**
- Incorrect use or omission of articles, such as using "a" instead of "an" or vice versa.
- Overuse or omission of the definite article "the."

**Prepositions:**
- Misuse of prepositions, such as using "in" instead of "on" or "at," or omitting prepositions where they are needed.

**Word Order:**
- Incorrect word order in sentences, especially in questions and negative sentences.
- Misplacement of adverbs or adjectives.

**Pluralization:**
- Incorrect plural forms of nouns, such as failing to add "-s" or "-es" when necessary.

**Pronoun Errors:**
- Confusion between subject and object pronouns.
- Incorrect use of possessive pronouns.

**Double Negatives:**
- Using double negatives, which is grammatically incorrect in standard English.

**Modal Verbs:**
- Misuse of modal verbs like can, could, will, would, should, etc.

**Confusing Similar Words:**
- Confusing words that sound similar but have different meanings and spellings (e.g., "their," "there," and "they're").

**Lack of Plural/Singular Agreement:**
- Mistakes in matching singular and plural nouns and verbs in a sentence.
# Future Goals
Use a bigger model, add grammar cases that the model misses, and use better, larger datasets.
# Training
It was reverse-trained on fiction/non-fiction US text where errors were intentionally inserted by another Llama model (Darth Karen) and a Python script.
# Usage
It should be used by submitting a paragraph or block of text at a time.
# Model uses ChatML
```
<|im_start|>system
<|im_end|>
<|im_start|>user
Edit the following text for spelling and grammar mistakes: {paragraph of text} <|im_end|>
<|im_start|>assistant
```
Note the pretext: *Edit the following text for spelling and grammar mistakes:* before the actual text. This way Karen wouldn't start talking ABOUT the text.
# Recommended settings
- Temperature: 0.7
- top_p: 0.1
- top_k: 40
- repetition penalty: 1.18
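Putting the ChatML template and the settings above together, a minimal sketch with `transformers` might look like this. It loads the original full-weight model `FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B` (this repository is an EXL2 quantization meant for ExLlamaV2, but the prompt format is the same):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The full-weight original; this repo itself is an EXL2 quant loaded via ExLlamaV2.
model_id = "FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

paragraph = "I goes to the store yesterday and buyed three apple."
prompt = (
    "<|im_start|>system\n<|im_end|>\n"
    "<|im_start|>user\n"
    f"Edit the following text for spelling and grammar mistakes: {paragraph} <|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,          # recommended settings from above
    top_p=0.1,
    top_k=40,
    repetition_penalty=1.18,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```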
Karen could also be used in chatting. But if Karen is involved in a conversation where the text is lengthy, she might interpret your message as needing proofreading assistance rather than simply acknowledging that you are using her in casual chat.
Example:
>Edit the following text for spelling and grammar mistakes: I see future writing more about you writing the story any way you can and LLM editing it and making it into a proper fiction. That seems to be more satisfying way where you could feel you actually had done something creATIVE. So human is the creative, LLM is the burger flipping intern.
>The opposite is what many people are trying rn. Make LLM to invent story (from premise, to creating outline to writing actual chapters) and the human to somehow assemble it from those pieces. So LLM is the creative man, and human is the floor polisher.
>I can't believe a normal person would find it satisfying long term.
>So in my view, the job of editor would be replaced by Ai, not the job of author.
(Grammarly Score: 48)
Output STRICT:
>I see the future of writing being more about you writing the story any way you can and having an AI like LLM edit it and make it into a proper fiction. That seems to be a more satisfying way where you could feel you actually did something creative. So humans are the creative ones, while LLMs are the burger-flipping interns.
>
>On the other hand, many people are currently trying to do the exact opposite. They want to make LLMs create stories (from premises, to outlining, to writing entire chapters). And then they want humans to somehow piece them together from these parts. In this scenario, LLMs become the creative individuals, and humans become the floor polishers.
>
>I can't imagine a regular person finding that satisfying over the long run.
>
>In my opinion, the role of an editor will eventually be taken over by AI, not the role of an author.
(Grammarly Score: 93)
Output CREATIVE:
>I see the future of writing being more focused on you writing the story any way you can and having an AI like GPT-3 edit it and turn it into a proper piece of fiction. This seems to be a more satisfying approach where you can truly feel that you have done something creative. In this scenario, humans remain the creative force, while the AI acts as the burger flipping intern.
>
>On the other hand, there are those who argue that we should let the AI generate stories from scratch - from coming up with a premise, to outlining the plot, to writing individual chapters. They believe that the role of the human should then become one of assembling these pieces together. In this case, the AI becomes the creative mind, and the human takes on the role of a floor polisher.
>
>I cannot imagine how someone could find this satisfying over the long term. Therefore, in my opinion, the job of the editor will eventually be taken over by AI, not the job of the author.
(Grammarly Score: 83)
# Conclusion
After probably 10 different versions with subsequent changes, I can now say that the current model works reasonably well, with occasional (but often debatable) grammar misses. The limitations seem to be related to the 7B parameters. It appears that the size isn't sufficient to have a fine-grained understanding of various nuances of the input. This correlates with my other findings - the Mistral model performs quite well when generating its own text, but its comprehension is less than perfect, again related to only 7B parameters.
The goal was to create a model that wouldn't change the style of the text. Often, LLM models, when asked to edit text, will attempt to rewrite the text even if the text is already fine. This proved to be quite challenging for such a small model where the main task was to determine the right balance between fixing the text (and not changing its style) and copying it verbatim.
The strict model assumes that you're already a good writer that doesn't need hand-holding and that every word you've written you've meant. |
hkivancoral/hushem_5x_deit_small_rms_001_fold5 | hkivancoral | 2023-11-19T18:25:38Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T18:15:34Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_small_rms_001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6585365853658537
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_rms_001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2275
- Accuracy: 0.6585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
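For reference, the hyperparameters above map roughly onto the standard `transformers` `TrainingArguments` as in the sketch below (the output directory and per-epoch evaluation strategy are assumptions based on this card, not the author's actual script):

```python
from transformers import TrainingArguments

# Sketch of the configuration above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="hushem_5x_deit_small_rms_001_fold5",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: per-epoch validation, matching the table below
)
```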
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7524 | 1.0 | 28 | 1.5298 | 0.2439 |
| 1.4312 | 2.0 | 56 | 1.4291 | 0.2683 |
| 1.3924 | 3.0 | 84 | 1.4059 | 0.2927 |
| 1.4173 | 4.0 | 112 | 1.3938 | 0.2683 |
| 1.3939 | 5.0 | 140 | 1.3790 | 0.2683 |
| 1.3863 | 6.0 | 168 | 1.4164 | 0.2439 |
| 1.3865 | 7.0 | 196 | 1.3790 | 0.2683 |
| 1.394 | 8.0 | 224 | 1.3790 | 0.2683 |
| 1.3883 | 9.0 | 252 | 1.4097 | 0.2683 |
| 1.3472 | 10.0 | 280 | 1.2478 | 0.4390 |
| 1.3905 | 11.0 | 308 | 1.2068 | 0.3902 |
| 1.1031 | 12.0 | 336 | 1.2038 | 0.4390 |
| 1.1503 | 13.0 | 364 | 1.0846 | 0.4634 |
| 1.2064 | 14.0 | 392 | 1.1395 | 0.4146 |
| 1.1249 | 15.0 | 420 | 1.1544 | 0.4146 |
| 1.1285 | 16.0 | 448 | 1.0714 | 0.4634 |
| 1.1149 | 17.0 | 476 | 0.9771 | 0.6098 |
| 1.0493 | 18.0 | 504 | 0.9974 | 0.4634 |
| 0.9938 | 19.0 | 532 | 0.9792 | 0.5366 |
| 1.0212 | 20.0 | 560 | 0.9949 | 0.5854 |
| 0.9943 | 21.0 | 588 | 1.0078 | 0.5366 |
| 1.0044 | 22.0 | 616 | 0.9007 | 0.5366 |
| 1.0661 | 23.0 | 644 | 1.2742 | 0.4878 |
| 0.9523 | 24.0 | 672 | 0.9851 | 0.6829 |
| 0.8733 | 25.0 | 700 | 0.9430 | 0.5854 |
| 0.8075 | 26.0 | 728 | 0.9660 | 0.6585 |
| 0.9128 | 27.0 | 756 | 0.9161 | 0.7561 |
| 0.8898 | 28.0 | 784 | 0.8767 | 0.7073 |
| 0.8051 | 29.0 | 812 | 0.8174 | 0.6829 |
| 0.8328 | 30.0 | 840 | 0.8077 | 0.6585 |
| 0.81 | 31.0 | 868 | 0.7911 | 0.6585 |
| 0.7372 | 32.0 | 896 | 1.0262 | 0.6585 |
| 0.7641 | 33.0 | 924 | 1.0698 | 0.5854 |
| 0.7745 | 34.0 | 952 | 0.8530 | 0.6829 |
| 0.7037 | 35.0 | 980 | 1.0106 | 0.6585 |
| 0.7449 | 36.0 | 1008 | 0.8975 | 0.7073 |
| 0.7391 | 37.0 | 1036 | 0.9607 | 0.6829 |
| 0.7447 | 38.0 | 1064 | 1.0096 | 0.6585 |
| 0.7043 | 39.0 | 1092 | 1.0986 | 0.7073 |
| 0.6379 | 40.0 | 1120 | 1.0787 | 0.6829 |
| 0.6476 | 41.0 | 1148 | 1.0057 | 0.6829 |
| 0.5799 | 42.0 | 1176 | 1.1714 | 0.6341 |
| 0.5954 | 43.0 | 1204 | 1.1356 | 0.6829 |
| 0.6189 | 44.0 | 1232 | 1.1609 | 0.6829 |
| 0.5672 | 45.0 | 1260 | 1.1726 | 0.6829 |
| 0.5115 | 46.0 | 1288 | 1.2388 | 0.6829 |
| 0.4522 | 47.0 | 1316 | 1.2273 | 0.6829 |
| 0.4728 | 48.0 | 1344 | 1.2290 | 0.6585 |
| 0.4195 | 49.0 | 1372 | 1.2275 | 0.6585 |
| 0.4871 | 50.0 | 1400 | 1.2275 | 0.6585 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
KareenaBeniwal/fine-tune-qna | KareenaBeniwal | 2023-11-19T18:24:51Z | 5 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:Bhautiksinh/BertPretrain",
"base_model:finetune:Bhautiksinh/BertPretrain",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-11-19T15:25:29Z | ---
base_model: Bhautiksinh/BertPretrain
tags:
- generated_from_keras_callback
model-index:
- name: fine-tune-qna
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# fine-tune-qna
This model is a fine-tuned version of [Bhautiksinh/BertPretrain](https://huggingface.co/Bhautiksinh/BertPretrain) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1155
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 4.0187 | 0 |
| 4.1155 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Karen_TheEditor_V2_STRICT_Mistral_7B-3.0bpw-h6-exl2 | LoneStriker | 2023-11-19T18:23:53Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llm",
"llama",
"spellcheck",
"grammar",
"conversational",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-19T18:06:41Z | ---
tags:
- llm
- llama
- spellcheck
- grammar
license: llama2
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B/resolve/main/karen2.jpg" alt="FPHam's Karen v2" style="width: 80%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Karen Ko-fi</a></p>
</div>
<!-- header end -->
# Karen is an editor for your text. (v.2) STRICT edition
Ah, Karen, a true peach among grammatical cucumbers! She yearns to rectify the missteps and linguistic tangles that infest your horribly written fiction.
Yet, unlike those ChatGPT kaboodles that morph into self-absorbed, constipated gurus of self-help style, Karen remains steadfastly grounded in grammatical wisdom but respectful of your style.
# Info
Karen, Version 2, uses a completely different data set and base model than the previous Karen.
# There are two versions of Karen V2
1. Strict (this one), in which Karen will try not to make too many changes to your original text, mostly fixing grammar and spelling, assuming that you know what you are doing.
2. Creative (to be uploaded), in which Karen may suggest slight contextual improvements or rephrasing where necessary. It's Karen, after a glass of wine.
# Goals
Karen's primary goal is to rectify grammatical and spelling errors in US English without altering the style of the text. She is adept at identifying and correcting common ESL errors.
**Verb Tense Errors:**
- Incorrect use of verb tenses, such as using present tense when past tense is required and vice versa.
- Confusion between continuous and simple tenses.

**Subject-Verb Agreement:**
- Lack of agreement between the subject and verb in number, e.g., using a singular verb with a plural subject or vice versa.

**Articles (a, an, the):**
- Incorrect use or omission of articles, such as using "a" instead of "an" or vice versa.
- Overuse or omission of the definite article "the."

**Prepositions:**
- Misuse of prepositions, such as using "in" instead of "on" or "at," or omitting prepositions where they are needed.

**Word Order:**
- Incorrect word order in sentences, especially in questions and negative sentences.
- Misplacement of adverbs or adjectives.

**Pluralization:**
- Incorrect plural forms of nouns, such as failing to add "-s" or "-es" when necessary.

**Pronoun Errors:**
- Confusion between subject and object pronouns.
- Incorrect use of possessive pronouns.

**Double Negatives:**
- Using double negatives, which is grammatically incorrect in standard English.

**Modal Verbs:**
- Misuse of modal verbs like can, could, will, would, should, etc.

**Confusing Similar Words:**
- Confusing words that sound similar but have different meanings and spellings (e.g., "their," "there," and "they're").

**Lack of Plural/Singular Agreement:**
- Mistakes in matching singular and plural nouns and verbs in a sentence.
# Future Goals
Use a bigger model, add grammar cases that the model misses, and use better, larger datasets.
# Training
It was reverse-trained on fiction/non-fiction US text where errors were intentionally inserted by another Llama model (Darth Karen) and a Python script.
# Usage
It should be used by submitting a paragraph or block of text at a time.
# Model uses ChatML
```
<|im_start|>system
<|im_end|>
<|im_start|>user
Edit the following text for spelling and grammar mistakes: {paragraph of text} <|im_end|>
<|im_start|>assistant
```
Note the pretext: *Edit the following text for spelling and grammar mistakes:* before the actual text. This way Karen wouldn't start talking ABOUT the text.
# Recommended settings
- Temperature: 0.7
- top_p: 0.1
- top_k: 40
- repetition penalty: 1.18
Karen could also be used in chatting. But if Karen is involved in a conversation where the text is lengthy, she might interpret your message as needing proofreading assistance rather than simply acknowledging that you are using her in casual chat.
Example:
>Edit the following text for spelling and grammar mistakes: I see future writing more about you writing the story any way you can and LLM editing it and making it into a proper fiction. That seems to be more satisfying way where you could feel you actually had done something creATIVE. So human is the creative, LLM is the burger flipping intern.
>The opposite is what many people are trying rn. Make LLM to invent story (from premise, to creating outline to writing actual chapters) and the human to somehow assemble it from those pieces. So LLM is the creative man, and human is the floor polisher.
>I can't believe a normal person would find it satisfying long term.
>So in my view, the job of editor would be replaced by Ai, not the job of author.
(Grammarly Score: 48)
Output STRICT:
>I see the future of writing being more about you writing the story any way you can and having an AI like LLM edit it and make it into a proper fiction. That seems to be a more satisfying way where you could feel you actually did something creative. So humans are the creative ones, while LLMs are the burger-flipping interns.
>
>On the other hand, many people are currently trying to do the exact opposite. They want to make LLMs create stories (from premises, to outlining, to writing entire chapters). And then they want humans to somehow piece them together from these parts. In this scenario, LLMs become the creative individuals, and humans become the floor polishers.
>
>I can't imagine a regular person finding that satisfying over the long run.
>
>In my opinion, the role of an editor will eventually be taken over by AI, not the role of an author.
(Grammarly Score: 93)
Output CREATIVE:
>I see the future of writing being more focused on you writing the story any way you can and having an AI like GPT-3 edit it and turn it into a proper piece of fiction. This seems to be a more satisfying approach where you can truly feel that you have done something creative. In this scenario, humans remain the creative force, while the AI acts as the burger flipping intern.
>
>On the other hand, there are those who argue that we should let the AI generate stories from scratch - from coming up with a premise, to outlining the plot, to writing individual chapters. They believe that the role of the human should then become one of assembling these pieces together. In this case, the AI becomes the creative mind, and the human takes on the role of a floor polisher.
>
>I cannot imagine how someone could find this satisfying over the long term. Therefore, in my opinion, the job of the editor will eventually be taken over by AI, not the job of the author.
(Grammarly Score: 83)
# Conclusion
After probably 10 different versions with subsequent changes, I can now say that the current model works reasonably well, with occasional (but often debatable) grammar misses. The limitations seem to be related to the 7B parameters. It appears that the size isn't sufficient to have a fine-grained understanding of various nuances of the input. This correlates with my other findings - the Mistral model performs quite well when generating its own text, but its comprehension is less than perfect, again related to only 7B parameters.
The goal was to create a model that wouldn't change the style of the text. Often, LLM models, when asked to edit text, will attempt to rewrite the text even if the text is already fine. This proved to be quite challenging for such a small model where the main task was to determine the right balance between fixing the text (and not changing its style) and copying it verbatim.
The strict model assumes that you're already a good writer that doesn't need hand-holding and that every word you've written you've meant. |
Ka4on/results | Ka4on | 2023-11-19T18:20:54Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-12T23:00:22Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9667 | 0.07 | 500 | 0.8561 |
| 0.8253 | 0.14 | 1000 | 0.7976 |
| 0.7771 | 0.2 | 1500 | 0.7676 |
| 0.7623 | 0.27 | 2000 | 0.7459 |
| 0.7399 | 0.34 | 2500 | 0.7269 |
| 0.7253 | 0.41 | 3000 | 0.7166 |
| 0.7241 | 0.47 | 3500 | 0.7035 |
| 0.7063 | 0.54 | 4000 | 0.6962 |
| 0.6857 | 0.61 | 4500 | 0.6883 |
| 0.6909 | 0.68 | 5000 | 0.6829 |
| 0.6754 | 0.75 | 5500 | 0.6731 |
| 0.6803 | 0.81 | 6000 | 0.6657 |
| 0.6659 | 0.88 | 6500 | 0.6599 |
| 0.6603 | 0.95 | 7000 | 0.6556 |
| 0.6249 | 1.02 | 7500 | 0.6610 |
| 0.53 | 1.09 | 8000 | 0.6583 |
| 0.5246 | 1.15 | 8500 | 0.6544 |
| 0.5204 | 1.22 | 9000 | 0.6515 |
| 0.5135 | 1.29 | 9500 | 0.6498 |
| 0.5165 | 1.36 | 10000 | 0.6433 |
| 0.518 | 1.42 | 10500 | 0.6410 |
| 0.5032 | 1.49 | 11000 | 0.6368 |
| 0.5091 | 1.56 | 11500 | 0.6335 |
| 0.5038 | 1.63 | 12000 | 0.6307 |
| 0.4907 | 1.7 | 12500 | 0.6302 |
| 0.5006 | 1.76 | 13000 | 0.6262 |
| 0.4823 | 1.83 | 13500 | 0.6239 |
| 0.4906 | 1.9 | 14000 | 0.6225 |
| 0.4905 | 1.97 | 14500 | 0.6218 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Ka4on/mistral_final | Ka4on | 2023-11-19T18:17:52Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
]
| null | 2023-11-19T18:16:14Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
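A minimal sketch for loading this adapter with `peft`, assuming the standard adapter-loading flow for the base model listed in the metadata (this is not the author's official snippet):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "Ka4on/mistral_final"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA/PEFT adapter weights from this repository.
model = PeftModel.from_pretrained(base_model, adapter_id)
```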
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
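The quantization settings above correspond to the following `transformers.BitsAndBytesConfig` (a direct mapping; `quant_method: bitsandbytes` is implied by using this class):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)
```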
### Framework versions
- PEFT 0.6.3.dev0
|
hkivancoral/hushem_5x_deit_small_rms_001_fold4 | hkivancoral | 2023-11-19T18:15:15Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T18:05:14Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_small_rms_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5952380952380952
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_rms_001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6694
- Accuracy: 0.5952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1638 | 1.0 | 28 | 1.7503 | 0.2381 |
| 1.4446 | 2.0 | 56 | 1.5611 | 0.2619 |
| 1.4481 | 3.0 | 84 | 1.4312 | 0.2381 |
| 1.3982 | 4.0 | 112 | 1.3919 | 0.2619 |
| 1.3867 | 5.0 | 140 | 1.4053 | 0.2619 |
| 1.382 | 6.0 | 168 | 1.3617 | 0.2619 |
| 1.2911 | 7.0 | 196 | 1.5439 | 0.4048 |
| 1.1486 | 8.0 | 224 | 1.1564 | 0.4286 |
| 1.0554 | 9.0 | 252 | 1.0568 | 0.4762 |
| 1.0402 | 10.0 | 280 | 0.8946 | 0.6190 |
| 0.9192 | 11.0 | 308 | 0.7214 | 0.7381 |
| 1.0116 | 12.0 | 336 | 0.8931 | 0.6905 |
| 0.9735 | 13.0 | 364 | 0.8359 | 0.6905 |
| 0.9105 | 14.0 | 392 | 0.6761 | 0.7619 |
| 0.8218 | 15.0 | 420 | 0.6339 | 0.7857 |
| 0.8745 | 16.0 | 448 | 0.7396 | 0.7619 |
| 0.8355 | 17.0 | 476 | 0.7738 | 0.7381 |
| 0.8644 | 18.0 | 504 | 0.6532 | 0.7619 |
| 0.8014 | 19.0 | 532 | 0.7016 | 0.7381 |
| 0.8685 | 20.0 | 560 | 0.7175 | 0.7381 |
| 0.7709 | 21.0 | 588 | 0.6588 | 0.7619 |
| 0.778 | 22.0 | 616 | 0.8635 | 0.7381 |
| 0.8232 | 23.0 | 644 | 0.6385 | 0.7143 |
| 0.891 | 24.0 | 672 | 0.7133 | 0.6667 |
| 0.714 | 25.0 | 700 | 0.6807 | 0.6905 |
| 0.6766 | 26.0 | 728 | 0.9128 | 0.6429 |
| 0.734 | 27.0 | 756 | 0.7515 | 0.6905 |
| 0.7087 | 28.0 | 784 | 0.6378 | 0.6905 |
| 0.6295 | 29.0 | 812 | 0.9113 | 0.6667 |
| 0.6414 | 30.0 | 840 | 0.9201 | 0.6190 |
| 0.6359 | 31.0 | 868 | 0.7354 | 0.7143 |
| 0.6485 | 32.0 | 896 | 0.6558 | 0.6429 |
| 0.6242 | 33.0 | 924 | 0.7790 | 0.6429 |
| 0.647 | 34.0 | 952 | 1.0490 | 0.5952 |
| 0.6524 | 35.0 | 980 | 0.7508 | 0.6667 |
| 0.5325 | 36.0 | 1008 | 0.9344 | 0.6667 |
| 0.476 | 37.0 | 1036 | 1.0580 | 0.5952 |
| 0.4941 | 38.0 | 1064 | 0.9380 | 0.7143 |
| 0.4232 | 39.0 | 1092 | 1.0384 | 0.5476 |
| 0.4302 | 40.0 | 1120 | 1.0844 | 0.6190 |
| 0.4057 | 41.0 | 1148 | 1.3995 | 0.5952 |
| 0.3483 | 42.0 | 1176 | 1.4823 | 0.5476 |
| 0.3043 | 43.0 | 1204 | 1.2186 | 0.6667 |
| 0.2598 | 44.0 | 1232 | 1.3028 | 0.5952 |
| 0.2113 | 45.0 | 1260 | 1.5042 | 0.6190 |
| 0.2104 | 46.0 | 1288 | 1.6174 | 0.5952 |
| 0.1769 | 47.0 | 1316 | 1.5011 | 0.6429 |
| 0.1341 | 48.0 | 1344 | 1.6784 | 0.5714 |
| 0.1239 | 49.0 | 1372 | 1.6694 | 0.5952 |
| 0.1545 | 50.0 | 1400 | 1.6694 | 0.5952 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
PaulaLi16/Medalpaca | PaulaLi16 | 2023-11-19T18:06:04Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:medalpaca/medalpaca-7b",
"base_model:adapter:medalpaca/medalpaca-7b",
"region:us"
]
| null | 2023-11-19T18:05:33Z | ---
library_name: peft
base_model: medalpaca/medalpaca-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.3.dev0
|
hkivancoral/hushem_5x_deit_small_rms_001_fold3 | hkivancoral | 2023-11-19T18:04:55Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T17:54:59Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_small_rms_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5348837209302325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_rms_001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4338
- Accuracy: 0.5349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1632 | 1.0 | 28 | 2.6011 | 0.2558 |
| 1.512 | 2.0 | 56 | 1.9238 | 0.2558 |
| 1.4664 | 3.0 | 84 | 1.5930 | 0.2558 |
| 1.4243 | 4.0 | 112 | 1.6311 | 0.2558 |
| 1.4308 | 5.0 | 140 | 1.5023 | 0.2326 |
| 1.3985 | 6.0 | 168 | 1.3885 | 0.2326 |
| 1.6118 | 7.0 | 196 | 1.8250 | 0.2326 |
| 1.4607 | 8.0 | 224 | 1.4482 | 0.2558 |
| 1.4254 | 9.0 | 252 | 1.5210 | 0.2326 |
| 1.2281 | 10.0 | 280 | 1.2713 | 0.2791 |
| 1.1707 | 11.0 | 308 | 1.6980 | 0.3256 |
| 1.1948 | 12.0 | 336 | 1.3889 | 0.3488 |
| 1.0995 | 13.0 | 364 | 1.2122 | 0.4651 |
| 1.0119 | 14.0 | 392 | 1.2109 | 0.3721 |
| 1.025 | 15.0 | 420 | 1.1189 | 0.4419 |
| 0.9953 | 16.0 | 448 | 1.0970 | 0.5581 |
| 1.0322 | 17.0 | 476 | 1.1852 | 0.5581 |
| 1.0805 | 18.0 | 504 | 1.3503 | 0.4651 |
| 1.0129 | 19.0 | 532 | 1.0139 | 0.5581 |
| 0.8769 | 20.0 | 560 | 1.2502 | 0.5349 |
| 0.9527 | 21.0 | 588 | 0.9400 | 0.6977 |
| 0.8714 | 22.0 | 616 | 0.9462 | 0.6744 |
| 0.8727 | 23.0 | 644 | 1.1395 | 0.4419 |
| 0.8037 | 24.0 | 672 | 0.9359 | 0.5814 |
| 0.7753 | 25.0 | 700 | 0.7772 | 0.6047 |
| 0.8041 | 26.0 | 728 | 0.7536 | 0.6744 |
| 0.8222 | 27.0 | 756 | 1.0294 | 0.4186 |
| 0.7867 | 28.0 | 784 | 1.0146 | 0.6512 |
| 0.7746 | 29.0 | 812 | 1.1197 | 0.5116 |
| 0.6826 | 30.0 | 840 | 0.8534 | 0.6977 |
| 0.6952 | 31.0 | 868 | 0.9094 | 0.5814 |
| 0.7133 | 32.0 | 896 | 0.7819 | 0.6047 |
| 0.6818 | 33.0 | 924 | 0.8848 | 0.6977 |
| 0.634 | 34.0 | 952 | 1.0225 | 0.6047 |
| 0.7437 | 35.0 | 980 | 0.9642 | 0.5349 |
| 0.6195 | 36.0 | 1008 | 1.1344 | 0.6047 |
| 0.6464 | 37.0 | 1036 | 1.0624 | 0.4186 |
| 0.5946 | 38.0 | 1064 | 1.1057 | 0.5116 |
| 0.5887 | 39.0 | 1092 | 1.0910 | 0.6512 |
| 0.6287 | 40.0 | 1120 | 1.0898 | 0.5581 |
| 0.5714 | 41.0 | 1148 | 1.2124 | 0.5349 |
| 0.5356 | 42.0 | 1176 | 1.2782 | 0.5116 |
| 0.4544 | 43.0 | 1204 | 1.1905 | 0.5814 |
| 0.3966 | 44.0 | 1232 | 1.4293 | 0.5349 |
| 0.3676 | 45.0 | 1260 | 1.3361 | 0.5581 |
| 0.3673 | 46.0 | 1288 | 1.3624 | 0.5349 |
| 0.3108 | 47.0 | 1316 | 1.3804 | 0.5581 |
| 0.2776 | 48.0 | 1344 | 1.4296 | 0.5349 |
| 0.2985 | 49.0 | 1372 | 1.4338 | 0.5349 |
| 0.271 | 50.0 | 1400 | 1.4338 | 0.5349 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Nikhil058/Taxi_QL | Nikhil058 | 2023-11-19T17:58:53Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-19T17:58:51Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_QL
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Nikhil058/Taxi_QL", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
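Once loaded as above, you can roll the agent out greedily; a minimal sketch, assuming a Gymnasium-style `reset`/`step` API and that the pickled dictionary stores the Q-table under a `qtable` key (as in the Deep RL course):
```python
import numpy as np
state, info = env.reset()
terminated, truncated = False, False
total_reward = 0
while not (terminated or truncated):
    # Act greedily with respect to the learned Q-table
    action = np.argmax(model["qtable"][state])
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print("Episode reward:", total_reward)
```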
|
TheBloke/XwinCoder-13B-GGUF | TheBloke | 2023-11-19T17:58:50Z | 327 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Xwin-LM/XwinCoder-13B",
"base_model:quantized:Xwin-LM/XwinCoder-13B",
"license:llama2",
"region:us"
]
| null | 2023-11-19T16:47:36Z | ---
base_model: Xwin-LM/XwinCoder-13B
inference: false
license: llama2
model_creator: Xwin-LM
model_name: XwinCoder 13B
model_type: llama
prompt_template: "<system>: You are an AI coding assistant that helps people with\
\ programming. Write a response that appropriately completes the user's request.\n\
<user>: {prompt}\n<AI>: \n"
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# XwinCoder 13B - GGUF
- Model creator: [Xwin-LM](https://huggingface.co/Xwin-LM)
- Original model: [XwinCoder 13B](https://huggingface.co/Xwin-LM/XwinCoder-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Xwin-LM's XwinCoder 13B](https://huggingface.co/Xwin-LM/XwinCoder-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/XwinCoder-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/XwinCoder-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF)
* [Xwin-LM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Xwin-LM/XwinCoder-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: XWin-Coder
```
<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the user's request.
<user>: {prompt}
<AI>:
```
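In client code, `{prompt}` is a placeholder to substitute with the user's actual request; a minimal Python sketch:
```python
prompt_template = (
    "<system>: You are an AI coding assistant that helps people with programming. "
    "Write a response that appropriately completes the user's request.\n"
    "<user>: {prompt}\n"
    "<AI>: "
)
# Fill in the placeholder with the actual request
full_prompt = prompt_template.format(prompt="Write a Python function that reverses a string.")
```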
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [xwincoder-13b.Q2_K.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [xwincoder-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [xwincoder-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [xwincoder-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [xwincoder-13b.Q4_0.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [xwincoder-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [xwincoder-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [xwincoder-13b.Q5_0.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [xwincoder-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [xwincoder-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [xwincoder-13b.Q6_K.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [xwincoder-13b.Q8_0.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/XwinCoder-13B-GGUF and below it, a specific filename to download, such as: xwincoder-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/XwinCoder-13B-GGUF xwincoder-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/XwinCoder-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/XwinCoder-13B-GGUF xwincoder-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m xwincoder-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the user's request.\n<user>: {prompt}\n<AI>:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
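For example, an interactive chat-style invocation might look like this (a sketch; adjust the model path and parameters to your setup):
```shell
./main -ngl 32 -m xwincoder-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```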
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/XwinCoder-13B-GGUF", model_file="xwincoder-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
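For comparison, a minimal llama-cpp-python sketch (assuming the GGUF file has already been downloaded locally, as shown above):
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only inference.
llm = Llama(model_path="xwincoder-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)
prompt = (
    "<system>: You are an AI coding assistant that helps people with programming. "
    "Write a response that appropriately completes the user's request.\n"
    "<user>: Write a Python function that reverses a string.\n"
    "<AI>: "
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```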
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Xwin-LM's XwinCoder 13B
# XwinCoder
We are glad to introduce our instruction finetuned code generation models based on CodeLLaMA: XwinCoder. We release model weights and evaluation code.
**Repository:** [https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Coder](https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Coder)
**Models:**
| Model | 🤗hf link | HumanEval pass@1 | MBPP pass@1 | APPS-intro pass@5 |
|-------|------------|----------|------|-------------|
| XwinCoder-7B | [link](https://huggingface.co/Xwin-LM/XwinCoder-7B) | 63.8 | 57.4 | 31.5 |
| XwinCoder-13B | [link](https://huggingface.co/Xwin-LM/XwinCoder-13B) | 68.8 | 60.1 | 35.4 |
| XwinCoder-34B | [link](https://huggingface.co/Xwin-LM/XwinCoder-34B) | 74.2 | 64.8 | 43.0 |
## Updates
- 💥 We released [**XwinCoder-7B**](https://huggingface.co/Xwin-LM/XwinCoder-7B), [**XwinCoder-13B**](https://huggingface.co/Xwin-LM/XwinCoder-13B), [**XwinCoder-34B**](https://huggingface.co/Xwin-LM/XwinCoder-34B). Our XwinCoder-34B reached 74.2 on HumanEval and it **achieves performance comparable to GPT-3.5-turbo on 6 benchmarks**.
- ❗We support evaluating instruction-finetuned models on HumanEval, MBPP, APPS, DS1000 and MT-Bench. See our GitHub repository.
## Overview

* To fully demonstrate our model's coding capabilities in real-world usage scenarios, we have conducted thorough evaluations on several existing mainstream coding capability leaderboards (rather than only on the currently most popular HumanEval).
* As shown in the radar chart results, our 34B model **achieves performance comparable to GPT-3.5-turbo on coding abilities**.
* It is worth mentioning that, to ensure accurate visualization, our radar chart has not been rescaled (values are only translated), except that the MT-Bench score is multiplied by 10 to make it more comparable with the other benchmarks.
* Multiple-E-avg6 refers to the average over the 6 languages used in the CodeLLaMA paper. Results for GPT-4 and GPT-3.5-turbo were produced by us; more details will be released later.
## Demo
We provide a chat demo in our GitHub repository; here are some examples:

<!-- original-model-card end -->
|
TheBloke/XwinCoder-13B-AWQ | TheBloke | 2023-11-19T17:57:34Z | 10 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:Xwin-LM/XwinCoder-13B",
"base_model:quantized:Xwin-LM/XwinCoder-13B",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2023-11-19T16:47:36Z | ---
base_model: Xwin-LM/XwinCoder-13B
inference: false
license: llama2
model_creator: Xwin-LM
model_name: XwinCoder 13B
model_type: llama
prompt_template: "<system>: You are an AI coding assistant that helps people with\
\ programming. Write a response that appropriately completes the user's request.\n\
<user>: {prompt}\n<AI>: \n"
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# XwinCoder 13B - AWQ
- Model creator: [Xwin-LM](https://huggingface.co/Xwin-LM)
- Original model: [XwinCoder 13B](https://huggingface.co/Xwin-LM/XwinCoder-13B)
<!-- description start -->
## Description
This repo contains AWQ model files for [Xwin-LM's XwinCoder 13B](https://huggingface.co/Xwin-LM/XwinCoder-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/XwinCoder-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/XwinCoder-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF)
* [Xwin-LM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Xwin-LM/XwinCoder-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: XWin-Coder
```
<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the user's request.
<user>: {prompt}
<AI>:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/XwinCoder-13B-AWQ/tree/main) | 4 | 128 | [code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/viewer/) | 4096 | 7.25 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/XwinCoder-13B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `XwinCoder-13B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/XwinCoder-13B-AWQ --quantization awq --dtype auto
```
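Once the server is running, you can query it over HTTP; a sketch assuming the default port 8000 and the legacy `/generate` endpoint of `vllm.entrypoints.api_server`:
```shell
curl http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the request.\n<user>: Tell me about AI\n<AI>: ", "max_tokens": 256, "temperature": 0.7}'
```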
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Note: a plain (non-f) string, so that {prompt} survives as a placeholder for the .format() call below
prompt_template='''<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the user's request.
<user>: {prompt}
<AI>:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/XwinCoder-13B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/XwinCoder-13B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the user's request.
<user>: {prompt}
<AI>:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/XwinCoder-13B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the user's request.
<user>: {prompt}
<AI>:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Xwin-LM's XwinCoder 13B
# XwinCoder
We are glad to introduce our instruction finetuned code generation models based on CodeLLaMA: XwinCoder. We release model weights and evaluation code.
**Repository:** [https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Coder](https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Coder)
**Models:**
| Model | 🤗hf link | HumanEval pass@1 | MBPP pass@1 | APPS-intro pass@5 |
|-------|------------|----------|------|-------------|
| XwinCoder-7B | [link](https://huggingface.co/Xwin-LM/XwinCoder-7B) | 63.8 | 57.4 | 31.5 |
| XwinCoder-13B | [link](https://huggingface.co/Xwin-LM/XwinCoder-13B) | 68.8 | 60.1 | 35.4 |
| XwinCoder-34B | [link](https://huggingface.co/Xwin-LM/XwinCoder-34B) | 74.2 | 64.8 | 43.0 |
## Updates
- 💥 We released [**XwinCoder-7B**](https://huggingface.co/Xwin-LM/XwinCoder-7B), [**XwinCoder-13B**](https://huggingface.co/Xwin-LM/XwinCoder-13B), [**XwinCoder-34B**](https://huggingface.co/Xwin-LM/XwinCoder-34B). Our XwinCoder-34B reached 74.2 on HumanEval and it **achieves performance comparable to GPT-3.5-turbo on 6 benchmarks**.
- ❗We support evaluating instruction-finetuned models on HumanEval, MBPP, APPS, DS1000 and MT-Bench. See our GitHub repository.
## Overview

* To fully demonstrate our model's coding capabilities in real-world usage scenarios, we have conducted thorough evaluations on several existing mainstream coding capability leaderboards (rather than only on the currently most popular HumanEval).
* As shown in the radar chart results, our 34B model **achieves performance comparable to GPT-3.5-turbo on coding abilities**.
* It is worth mentioning that, to ensure accurate visualization, our radar chart has not been rescaled (values are only translated), except that the MT-Bench score is multiplied by 10 to make it more comparable with the other benchmarks.
* Multiple-E-avg6 refers to the average over the 6 languages used in the CodeLLaMA paper. Results for GPT-4 and GPT-3.5-turbo were produced by us; more details will be released later.
## Demo
We provide a chat demo in our GitHub repository; here are some examples:

|
vpr30/newspaper-qa | vpr30 | 2023-11-19T17:56:45Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/tinyroberta-squad2",
"base_model:finetune:deepset/tinyroberta-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-11-15T06:38:04Z | ---
license: cc-by-4.0
base_model: deepset/tinyroberta-squad2
tags:
- generated_from_trainer
model-index:
- name: newspaper-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newspaper-qa
This model is a fine-tuned version of [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7961
## Model description
More information needed
## Intended uses & limitations
More information needed
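As a starting point, a minimal inference sketch using the Transformers question-answering pipeline (the question and context below are illustrative):
```python
from transformers import pipeline
qa = pipeline("question-answering", model="vpr30/newspaper-qa")
result = qa(
    question="Who announced the merger?",
    context="The merger was announced on Monday by the CEO of Example Corp.",
)
print(result["answer"], round(result["score"], 4))
```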
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 345 | 0.9964 |
| 0.0292 | 2.0 | 690 | 0.7626 |
| 0.0163 | 3.0 | 1035 | 0.7961 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/hushem_5x_deit_small_rms_001_fold2 | hkivancoral | 2023-11-19T17:54:40Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T17:44:49Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_small_rms_001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.35555555555555557
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_rms_001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6556
- Accuracy: 0.3556
## Model description
More information needed
## Intended uses & limitations
More information needed
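As a starting point, a minimal inference sketch using the Transformers image-classification pipeline (the image path below is illustrative):
```python
from transformers import pipeline
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_5x_deit_small_rms_001_fold2",
)
# Replace with a path or URL to one of your own images
for prediction in classifier("example_image.png"):
    print(prediction["label"], round(prediction["score"], 4))
```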
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0324 | 1.0 | 27 | 1.4564 | 0.2444 |
| 1.4819 | 2.0 | 54 | 1.4327 | 0.2444 |
| 1.4504 | 3.0 | 81 | 1.4455 | 0.2667 |
| 1.4703 | 4.0 | 108 | 1.5353 | 0.2444 |
| 1.4319 | 5.0 | 135 | 1.4161 | 0.2444 |
| 1.4127 | 6.0 | 162 | 1.4083 | 0.2444 |
| 1.424 | 7.0 | 189 | 1.4264 | 0.2667 |
| 1.3928 | 8.0 | 216 | 1.4087 | 0.2889 |
| 1.4183 | 9.0 | 243 | 1.3797 | 0.2667 |
| 1.2937 | 10.0 | 270 | 1.5479 | 0.3333 |
| 1.444 | 11.0 | 297 | 1.4212 | 0.2667 |
| 1.2489 | 12.0 | 324 | 1.3827 | 0.3333 |
| 1.2092 | 13.0 | 351 | 1.4109 | 0.3333 |
| 1.1924 | 14.0 | 378 | 1.3647 | 0.3556 |
| 1.1322 | 15.0 | 405 | 1.4486 | 0.4 |
| 1.059 | 16.0 | 432 | 1.3236 | 0.2889 |
| 1.007 | 17.0 | 459 | 1.5059 | 0.3778 |
| 1.0396 | 18.0 | 486 | 1.8214 | 0.3778 |
| 0.9935 | 19.0 | 513 | 1.6035 | 0.2222 |
| 0.9595 | 20.0 | 540 | 1.8699 | 0.3111 |
| 0.9315 | 21.0 | 567 | 1.9455 | 0.2889 |
| 0.9127 | 22.0 | 594 | 1.9720 | 0.1778 |
| 0.9141 | 23.0 | 621 | 1.8863 | 0.4222 |
| 0.8941 | 24.0 | 648 | 2.4630 | 0.2444 |
| 0.861 | 25.0 | 675 | 2.3990 | 0.2 |
| 0.8474 | 26.0 | 702 | 2.1204 | 0.3556 |
| 0.7937 | 27.0 | 729 | 2.7394 | 0.3556 |
| 0.7958 | 28.0 | 756 | 2.5648 | 0.2 |
| 0.7373 | 29.0 | 783 | 2.5253 | 0.3778 |
| 0.7358 | 30.0 | 810 | 2.5059 | 0.3778 |
| 0.691 | 31.0 | 837 | 2.3895 | 0.4222 |
| 0.7103 | 32.0 | 864 | 2.5414 | 0.4222 |
| 0.6539 | 33.0 | 891 | 3.0204 | 0.3333 |
| 0.6275 | 34.0 | 918 | 2.6245 | 0.3778 |
| 0.5921 | 35.0 | 945 | 3.2133 | 0.2667 |
| 0.5912 | 36.0 | 972 | 3.5251 | 0.2667 |
| 0.5547 | 37.0 | 999 | 3.3775 | 0.2889 |
| 0.4976 | 38.0 | 1026 | 3.1294 | 0.4 |
| 0.4303 | 39.0 | 1053 | 3.2846 | 0.3778 |
| 0.3956 | 40.0 | 1080 | 3.2354 | 0.4444 |
| 0.3999 | 41.0 | 1107 | 3.0834 | 0.4667 |
| 0.3745 | 42.0 | 1134 | 3.3561 | 0.3333 |
| 0.3219 | 43.0 | 1161 | 3.3246 | 0.3333 |
| 0.2571 | 44.0 | 1188 | 3.4952 | 0.3556 |
| 0.2544 | 45.0 | 1215 | 3.6528 | 0.3778 |
| 0.2048 | 46.0 | 1242 | 3.6814 | 0.3333 |
| 0.2017 | 47.0 | 1269 | 3.5396 | 0.3778 |
| 0.1409 | 48.0 | 1296 | 3.6629 | 0.3556 |
| 0.1528 | 49.0 | 1323 | 3.6556 | 0.3556 |
| 0.122 | 50.0 | 1350 | 3.6556 | 0.3556 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
kamalp99/q-FrozenLake-v1-4x4-noSlippery | kamalp99 | 2023-11-19T17:49:04Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-19T17:49:02Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="kamalp99/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
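Because this agent was trained on the non-slippery map, the environment usually needs to be recreated with the same kwargs used in training; a minimal sketch, assuming a Gymnasium-style API and a `qtable` key in the model dictionary (as in the Deep RL course):
```python
import gym
import numpy as np
# The stored env_id may not encode kwargs, so pass them explicitly
env = gym.make("FrozenLake-v1", map_name="4x4", is_slippery=False)
state, info = env.reset()
terminated, truncated = False, False
while not (terminated or truncated):
    action = np.argmax(model["qtable"][state])  # act greedily
    state, reward, terminated, truncated, info = env.step(action)
```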
|
Divyanshh/Bloom-560M-Story-generator | Divyanshh | 2023-11-19T17:47:29Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"region:us"
]
| null | 2023-11-19T16:42:06Z | ---
library_name: peft
base_model: bigscience/bloom-560m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
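In the absence of author-provided code, a minimal sketch for loading the adapter with PEFT on the stated base model (assuming a causal-LM adapter; the prompt and generation parameters are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
model = PeftModel.from_pretrained(base, "Divyanshh/Bloom-560M-Story-generator")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
inputs = tokenizer("Once upon a time", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```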
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
|
davidgaofc/TechDebtLabeler | davidgaofc | 2023-11-19T17:43:33Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-13T04:40:53Z | ---
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
model-index:
- name: training
results: []
---
# training
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on [a dataset created from The Technical Debt Dataset](https://huggingface.co/datasets/davidgaofc/techdebt_label).
## Dataset citation
Valentina Lenarduzzi, Nyyti Saarimäki, Davide Taibi. The Technical Debt Dataset. Proceedings for the 15th Conference on Predictive Models and Data Analytics in Software Engineering. Brazil. 2019.
## Model description
Generates descriptions of Git commits that contain code smells which may signify technical debt.
## Intended uses & limitations
Use with caution: the model is limited by a small training set and a limited variety of training labels. Improvements are in progress.
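A minimal usage sketch with the Transformers text2text-generation pipeline (the diff snippet is illustrative; the exact input format the model expects is not documented here):
```python
from transformers import pipeline
generator = pipeline("text2text-generation", model="davidgaofc/TechDebtLabeler")
diff = "+ def process(data):\n+     # TODO: handle errors\n+     return data.strip()"
print(generator(diff, max_length=64)[0]["generated_text"])
```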
## Training procedure
One epoch of training on the dataset referenced above.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 100
- total_train_batch_size: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Nikhil058/q-FrozenLake-v1-4x4-noSlippery | Nikhil058 | 2023-11-19T17:41:02Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-19T17:40:59Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.65 +/- 0.48
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Nikhil058/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
anycores/whisper_tiny_v1.1_intel | anycores | 2023-11-19T17:39:22Z | 0 | 0 | null | [
"code",
"audio",
"acceleration",
"network",
"license:mit",
"region:us"
]
| null | 2023-09-29T19:01:34Z | ---
license: mit
tags:
- code
- audio
- acceleration
- network
---
## Overview
This is an implementation of Whisper from scratch in C++.
This is a proof of concept; further modifications and improvements are coming.
Feedback is welcome in the corresponding GitHub repository, [precompAId](https://github.com/anycores/precompAId).
The binary package contains:
* exe for testing the app quickly
* header and dll for building custom solutions
* main.cpp as an example of how to use the header (the exe is compiled from this)
* weights.xdf (required to load into the graph, no other input required)
* audios folder, containing examples to try the application
* convert.py for creating the right input for the application from an arbitrary audio file
Versions:
* for Windows there are 4 compiled versions
* each version corresponds to a level of available instruction sets
* avx512: requires avx512F, avx512BW, avx512VL and FMA
* avx2: requires avx2 and FMA
* sse: requires sse4.1
* default: requires no intrinsics-related CPU features
## Quick start
Example usage of whisper.exe (the dll must be discoverable by the exe):
```
whisper.exe weights.xgdf audios\voice_example1.pb
```
Example compilation (with clang from the root):
```
clang++ main.cpp win64\whisper.lib -o whisper.exe
```
Example for converting:
```
python convert.py --ipath audios\voice_example_orig1.wav --opath voice_example.pb
```
## Implementation info
Tested on:
* Windows 11 and Ubuntu 20
* Intel i7, 11th gen
* Clang 16.0.6 as the compiler
Current properties:
* fp32
* [this tool](https://github.com/archspec/archspec) can help enlisting the available cpu features for selecting the right library version
## Further Notes
Improved versions will arrive regularly.
Feedback is welcome, especially on the following:
* features to add (input format, expected output format, etc.)
* devices (we plan to extend to mobiles, IPUs, etc.)
* models (what other models would be great to accelerate) |
mubashirsaeed/care-bot-harry-potter-falcon-7b-4 | mubashirsaeed | 2023-11-19T17:34:13Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"region:us"
]
| null | 2023-11-19T17:34:08Z | ---
library_name: peft
base_model: tiiuae/falcon-7b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
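In the absence of author-provided code, a minimal sketch for loading the adapter with PEFT on the stated base model (the adapter was trained with 4-bit quantization, so for faithful inference you may want to load the base model in 4-bit as well; the prompt and generation parameters are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",
    device_map="auto",
    trust_remote_code=True,  # older transformers releases need this for Falcon
)
model = PeftModel.from_pretrained(base, "mubashirsaeed/care-bot-harry-potter-falcon-7b-4")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
inputs = tokenizer("Hello Harry, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```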
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
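A sketch of the equivalent `BitsAndBytesConfig`; the remaining fields listed above are the library defaults:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```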
### Framework versions
- PEFT 0.6.3.dev0
|
Akshay0706/All-Plants-18-Epochs-Model | Akshay0706 | 2023-11-19T17:33:59Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T17:33:32Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
- f1
model-index:
- name: All-Plants-18-Epochs-Model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
config: Dataset
split: train
args: Dataset
metrics:
- name: Accuracy
type: accuracy
value: 0.9847645429362881
- name: F1
type: f1
value: 0.984922643975302
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# All-Plants-18-Epochs-Model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0888
- Accuracy: 0.9848
- F1: 0.9849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 18
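A hedged sketch of `TrainingArguments` matching the values above; the actual training script is not part of this card, and `output_dir` is an assumption:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="All-Plants-18-Epochs-Model",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=18,  # the Adam betas/epsilon above are library defaults
)
```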
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9212 | 1.0 | 407 | 0.3931 | 0.9501 | 0.9579 |
| 0.2659 | 2.0 | 814 | 0.2176 | 0.9668 | 0.9674 |
| 0.137 | 3.0 | 1221 | 0.1481 | 0.9723 | 0.9731 |
| 0.0865 | 4.0 | 1628 | 0.1043 | 0.9834 | 0.9836 |
| 0.0557 | 5.0 | 2035 | 0.0888 | 0.9848 | 0.9849 |
| 0.0408 | 6.0 | 2442 | 0.0839 | 0.9848 | 0.9848 |
| 0.0289 | 7.0 | 2849 | 0.0920 | 0.9848 | 0.9849 |
| 0.0229 | 8.0 | 3256 | 0.0817 | 0.9834 | 0.9837 |
| 0.0175 | 9.0 | 3663 | 0.0890 | 0.9820 | 0.9823 |
| 0.0156 | 10.0 | 4070 | 0.0966 | 0.9820 | 0.9823 |
| 0.0121 | 11.0 | 4477 | 0.0809 | 0.9834 | 0.9837 |
| 0.0102 | 12.0 | 4884 | 0.0875 | 0.9820 | 0.9823 |
| 0.0086 | 13.0 | 5291 | 0.0873 | 0.9820 | 0.9823 |
| 0.0077 | 14.0 | 5698 | 0.0860 | 0.9820 | 0.9823 |
| 0.0068 | 15.0 | 6105 | 0.0876 | 0.9820 | 0.9823 |
| 0.0062 | 16.0 | 6512 | 0.0896 | 0.9820 | 0.9823 |
| 0.0059 | 17.0 | 6919 | 0.0890 | 0.9820 | 0.9823 |
| 0.0056 | 18.0 | 7326 | 0.0894 | 0.9820 | 0.9823 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
Lichang-Chen/zephyr-7b-sft-lora | Lichang-Chen | 2023-11-19T17:32:41Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"region:us"
]
| null | 2023-11-19T16:02:08Z | ---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-sft-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-lora
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 128
- total_train_batch_size: 4096
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8999 | 0.99 | 15 | 0.8835 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hkivancoral/hushem_5x_deit_tiny_sgd_001_fold5 | hkivancoral | 2023-11-19T17:31:44Z | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T17:24:48Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_sgd_001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5121951219512195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_sgd_001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0480
- Accuracy: 0.5122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4966 | 1.0 | 28 | 1.5748 | 0.2439 |
| 1.363 | 2.0 | 56 | 1.4510 | 0.2927 |
| 1.3445 | 3.0 | 84 | 1.3731 | 0.3902 |
| 1.2909 | 4.0 | 112 | 1.3148 | 0.3902 |
| 1.2782 | 5.0 | 140 | 1.2775 | 0.4146 |
| 1.2431 | 6.0 | 168 | 1.2527 | 0.4146 |
| 1.1698 | 7.0 | 196 | 1.2349 | 0.4634 |
| 1.1766 | 8.0 | 224 | 1.2144 | 0.4634 |
| 1.17 | 9.0 | 252 | 1.1948 | 0.4634 |
| 1.1062 | 10.0 | 280 | 1.1764 | 0.4390 |
| 1.0601 | 11.0 | 308 | 1.1840 | 0.4634 |
| 1.0566 | 12.0 | 336 | 1.1703 | 0.4634 |
| 1.0478 | 13.0 | 364 | 1.1443 | 0.4634 |
| 1.0482 | 14.0 | 392 | 1.1542 | 0.4634 |
| 1.0161 | 15.0 | 420 | 1.1465 | 0.4634 |
| 1.0335 | 16.0 | 448 | 1.1434 | 0.4634 |
| 0.9719 | 17.0 | 476 | 1.1475 | 0.4634 |
| 0.9588 | 18.0 | 504 | 1.1439 | 0.4634 |
| 1.0081 | 19.0 | 532 | 1.1431 | 0.4634 |
| 0.973 | 20.0 | 560 | 1.1304 | 0.4878 |
| 0.94 | 21.0 | 588 | 1.1093 | 0.4878 |
| 0.8982 | 22.0 | 616 | 1.1184 | 0.4878 |
| 0.9204 | 23.0 | 644 | 1.1332 | 0.4634 |
| 0.8435 | 24.0 | 672 | 1.1088 | 0.4878 |
| 0.8736 | 25.0 | 700 | 1.0913 | 0.4878 |
| 0.846 | 26.0 | 728 | 1.0897 | 0.4878 |
| 0.8446 | 27.0 | 756 | 1.0809 | 0.4878 |
| 0.8745 | 28.0 | 784 | 1.0794 | 0.4878 |
| 0.8251 | 29.0 | 812 | 1.0765 | 0.5122 |
| 0.8547 | 30.0 | 840 | 1.0870 | 0.4878 |
| 0.7939 | 31.0 | 868 | 1.0770 | 0.4878 |
| 0.7828 | 32.0 | 896 | 1.0780 | 0.4878 |
| 0.8106 | 33.0 | 924 | 1.0700 | 0.5122 |
| 0.784 | 34.0 | 952 | 1.0593 | 0.5122 |
| 0.7795 | 35.0 | 980 | 1.0615 | 0.4878 |
| 0.8007 | 36.0 | 1008 | 1.0592 | 0.4878 |
| 0.726 | 37.0 | 1036 | 1.0594 | 0.4878 |
| 0.7657 | 38.0 | 1064 | 1.0523 | 0.4878 |
| 0.7942 | 39.0 | 1092 | 1.0544 | 0.4878 |
| 0.7485 | 40.0 | 1120 | 1.0497 | 0.5122 |
| 0.7752 | 41.0 | 1148 | 1.0549 | 0.5122 |
| 0.7115 | 42.0 | 1176 | 1.0535 | 0.4878 |
| 0.7477 | 43.0 | 1204 | 1.0497 | 0.5122 |
| 0.769 | 44.0 | 1232 | 1.0484 | 0.5122 |
| 0.7292 | 45.0 | 1260 | 1.0496 | 0.5122 |
| 0.7475 | 46.0 | 1288 | 1.0482 | 0.5122 |
| 0.7629 | 47.0 | 1316 | 1.0480 | 0.5122 |
| 0.8 | 48.0 | 1344 | 1.0480 | 0.5122 |
| 0.7301 | 49.0 | 1372 | 1.0480 | 0.5122 |
| 0.738 | 50.0 | 1400 | 1.0480 | 0.5122 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/hushem_5x_deit_tiny_sgd_001_fold4 | hkivancoral | 2023-11-19T17:24:31Z | 16 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T17:17:31Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_sgd_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5238095238095238
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_sgd_001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0007
- Accuracy: 0.5238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4776 | 1.0 | 28 | 1.5069 | 0.2857 |
| 1.3802 | 2.0 | 56 | 1.4145 | 0.3571 |
| 1.3245 | 3.0 | 84 | 1.3548 | 0.3571 |
| 1.3009 | 4.0 | 112 | 1.3137 | 0.4286 |
| 1.2628 | 5.0 | 140 | 1.2827 | 0.4524 |
| 1.2443 | 6.0 | 168 | 1.2514 | 0.5238 |
| 1.1651 | 7.0 | 196 | 1.2261 | 0.5238 |
| 1.1485 | 8.0 | 224 | 1.2037 | 0.5238 |
| 1.1029 | 9.0 | 252 | 1.1805 | 0.5238 |
| 1.0945 | 10.0 | 280 | 1.1607 | 0.5238 |
| 1.1057 | 11.0 | 308 | 1.1451 | 0.5476 |
| 1.0601 | 12.0 | 336 | 1.1295 | 0.5476 |
| 1.0375 | 13.0 | 364 | 1.1248 | 0.5476 |
| 1.024 | 14.0 | 392 | 1.1065 | 0.5952 |
| 0.9777 | 15.0 | 420 | 1.0997 | 0.5952 |
| 0.9798 | 16.0 | 448 | 1.0984 | 0.5952 |
| 0.9759 | 17.0 | 476 | 1.0858 | 0.5952 |
| 0.9492 | 18.0 | 504 | 1.0744 | 0.5476 |
| 0.911 | 19.0 | 532 | 1.0716 | 0.5952 |
| 0.9409 | 20.0 | 560 | 1.0622 | 0.5476 |
| 0.8706 | 21.0 | 588 | 1.0578 | 0.5476 |
| 0.9232 | 22.0 | 616 | 1.0547 | 0.5952 |
| 0.8639 | 23.0 | 644 | 1.0468 | 0.5 |
| 0.9013 | 24.0 | 672 | 1.0442 | 0.5238 |
| 0.8242 | 25.0 | 700 | 1.0432 | 0.5238 |
| 0.8379 | 26.0 | 728 | 1.0386 | 0.5238 |
| 0.8656 | 27.0 | 756 | 1.0271 | 0.5238 |
| 0.8539 | 28.0 | 784 | 1.0232 | 0.5 |
| 0.831 | 29.0 | 812 | 1.0228 | 0.5238 |
| 0.7984 | 30.0 | 840 | 1.0256 | 0.5 |
| 0.8188 | 31.0 | 868 | 1.0204 | 0.5 |
| 0.8337 | 32.0 | 896 | 1.0202 | 0.5 |
| 0.7879 | 33.0 | 924 | 1.0178 | 0.5 |
| 0.7864 | 34.0 | 952 | 1.0219 | 0.5238 |
| 0.8414 | 35.0 | 980 | 1.0150 | 0.5238 |
| 0.8067 | 36.0 | 1008 | 1.0140 | 0.5238 |
| 0.7647 | 37.0 | 1036 | 1.0119 | 0.5238 |
| 0.7807 | 38.0 | 1064 | 1.0087 | 0.5238 |
| 0.7751 | 39.0 | 1092 | 1.0072 | 0.5238 |
| 0.7728 | 40.0 | 1120 | 1.0064 | 0.5238 |
| 0.7814 | 41.0 | 1148 | 1.0052 | 0.5238 |
| 0.7361 | 42.0 | 1176 | 1.0026 | 0.5238 |
| 0.7838 | 43.0 | 1204 | 1.0019 | 0.5238 |
| 0.7388 | 44.0 | 1232 | 1.0012 | 0.5238 |
| 0.7605 | 45.0 | 1260 | 1.0006 | 0.5238 |
| 0.7578 | 46.0 | 1288 | 1.0005 | 0.5238 |
| 0.7479 | 47.0 | 1316 | 1.0010 | 0.5238 |
| 0.7186 | 48.0 | 1344 | 1.0007 | 0.5238 |
| 0.7471 | 49.0 | 1372 | 1.0007 | 0.5238 |
| 0.7354 | 50.0 | 1400 | 1.0007 | 0.5238 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Anwaarma/Improved-bert-multilingual | Anwaarma | 2023-11-19T17:20:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-19T17:17:37Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Improved-bert-multilingual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Improved-bert-multilingual
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0118
- Accuracy: 0.78
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6856 | 0.55 | 50 | 0.6607 | 0.61 |
| 0.5729 | 1.1 | 100 | 0.5453 | 0.73 |
| 0.4649 | 1.65 | 150 | 0.5915 | 0.66 |
| 0.385 | 2.2 | 200 | 0.5824 | 0.7 |
| 0.343 | 2.75 | 250 | 0.4548 | 0.79 |
| 0.305 | 3.3 | 300 | 0.6585 | 0.71 |
| 0.2355 | 3.85 | 350 | 0.6034 | 0.77 |
| 0.2176 | 4.4 | 400 | 0.5191 | 0.79 |
| 0.2137 | 4.95 | 450 | 0.6655 | 0.73 |
| 0.181 | 5.49 | 500 | 0.6929 | 0.78 |
| 0.171 | 6.04 | 550 | 1.0172 | 0.65 |
| 0.1267 | 6.59 | 600 | 0.9904 | 0.67 |
| 0.1152 | 7.14 | 650 | 1.0817 | 0.65 |
| 0.1045 | 7.69 | 700 | 1.1231 | 0.66 |
| 0.0973 | 8.24 | 750 | 1.0118 | 0.78 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.7
- Tokenizers 0.14.1
|
Anwaarma/Improved-bert-multilingual-nodropout | Anwaarma | 2023-11-19T17:17:35Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-19T17:14:49Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Improved-bert-multilingual-nodropout
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Improved-bert-multilingual-nodropout
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0144
- Accuracy: 0.76
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6855 | 0.55 | 50 | 0.6606 | 0.61 |
| 0.5737 | 1.1 | 100 | 0.5402 | 0.74 |
| 0.4677 | 1.65 | 150 | 0.5966 | 0.67 |
| 0.3866 | 2.2 | 200 | 0.5778 | 0.69 |
| 0.3414 | 2.75 | 250 | 0.4483 | 0.8 |
| 0.3047 | 3.3 | 300 | 0.7158 | 0.68 |
| 0.2366 | 3.85 | 350 | 0.6137 | 0.77 |
| 0.2188 | 4.4 | 400 | 0.5294 | 0.77 |
| 0.2191 | 4.95 | 450 | 0.6734 | 0.73 |
| 0.1935 | 5.49 | 500 | 0.6588 | 0.8 |
| 0.1707 | 6.04 | 550 | 1.0354 | 0.65 |
| 0.1216 | 6.59 | 600 | 0.9658 | 0.69 |
| 0.1093 | 7.14 | 650 | 1.0317 | 0.69 |
| 0.1099 | 7.69 | 700 | 0.9484 | 0.71 |
| 0.1128 | 8.24 | 750 | 1.0144 | 0.76 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.7
- Tokenizers 0.14.1
|
Anwaarma/Improved-Arabic-bert-base | Anwaarma | 2023-11-19T17:09:57Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:asafaya/bert-base-arabic",
"base_model:finetune:asafaya/bert-base-arabic",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-19T17:07:27Z | ---
base_model: asafaya/bert-base-arabic
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Improved-Arabic-bert-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Improved-Arabic-bert-base
This model is a fine-tuned version of [asafaya/bert-base-arabic](https://huggingface.co/asafaya/bert-base-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7595
- Accuracy: 0.86
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5197 | 0.55 | 50 | 0.3977 | 0.8 |
| 0.3323 | 1.1 | 100 | 0.3298 | 0.86 |
| 0.2844 | 1.65 | 150 | 0.3401 | 0.84 |
| 0.2128 | 2.2 | 200 | 0.4569 | 0.8 |
| 0.1539 | 2.75 | 250 | 0.4315 | 0.83 |
| 0.1346 | 3.3 | 300 | 0.5178 | 0.81 |
| 0.0933 | 3.85 | 350 | 0.5167 | 0.84 |
| 0.0641 | 4.4 | 400 | 0.6903 | 0.82 |
| 0.0698 | 4.95 | 450 | 0.5628 | 0.85 |
| 0.028 | 5.49 | 500 | 0.6472 | 0.86 |
| 0.0449 | 6.04 | 550 | 0.6739 | 0.85 |
| 0.0133 | 6.59 | 600 | 0.6925 | 0.84 |
| 0.0177 | 7.14 | 650 | 0.6716 | 0.87 |
| 0.0209 | 7.69 | 700 | 0.6644 | 0.89 |
| 0.0226 | 8.24 | 750 | 0.7650 | 0.84 |
| 0.0137 | 8.79 | 800 | 0.8186 | 0.86 |
| 0.0164 | 9.34 | 850 | 0.7771 | 0.86 |
| 0.006 | 9.89 | 900 | 0.7805 | 0.85 |
| 0.0069 | 10.44 | 950 | 0.7595 | 0.86 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.7
- Tokenizers 0.14.1
|
hkivancoral/hushem_5x_deit_tiny_sgd_001_fold2 | hkivancoral | 2023-11-19T17:09:56Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T17:03:00Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_sgd_001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.3111111111111111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_sgd_001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5326
- Accuracy: 0.3111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5319 | 1.0 | 27 | 1.6147 | 0.1778 |
| 1.412 | 2.0 | 54 | 1.5641 | 0.2222 |
| 1.3305 | 3.0 | 81 | 1.5365 | 0.2 |
| 1.301 | 4.0 | 108 | 1.5559 | 0.2222 |
| 1.2455 | 5.0 | 135 | 1.5605 | 0.2444 |
| 1.184 | 6.0 | 162 | 1.5721 | 0.2444 |
| 1.1536 | 7.0 | 189 | 1.5847 | 0.2444 |
| 1.141 | 8.0 | 216 | 1.6070 | 0.2667 |
| 1.0813 | 9.0 | 243 | 1.6240 | 0.2667 |
| 1.0544 | 10.0 | 270 | 1.6212 | 0.2667 |
| 1.0306 | 11.0 | 297 | 1.6262 | 0.2667 |
| 0.9926 | 12.0 | 324 | 1.6270 | 0.2667 |
| 0.9991 | 13.0 | 351 | 1.6433 | 0.2444 |
| 0.9662 | 14.0 | 378 | 1.6269 | 0.2667 |
| 0.9752 | 15.0 | 405 | 1.6379 | 0.2444 |
| 0.9275 | 16.0 | 432 | 1.6386 | 0.2444 |
| 0.9112 | 17.0 | 459 | 1.6378 | 0.2667 |
| 0.8926 | 18.0 | 486 | 1.6345 | 0.2667 |
| 0.8698 | 19.0 | 513 | 1.6300 | 0.2444 |
| 0.8732 | 20.0 | 540 | 1.6217 | 0.2444 |
| 0.8587 | 21.0 | 567 | 1.6212 | 0.2667 |
| 0.8545 | 22.0 | 594 | 1.6207 | 0.2667 |
| 0.8339 | 23.0 | 621 | 1.6201 | 0.2444 |
| 0.8104 | 24.0 | 648 | 1.6072 | 0.2667 |
| 0.7957 | 25.0 | 675 | 1.6070 | 0.2667 |
| 0.8197 | 26.0 | 702 | 1.6043 | 0.2444 |
| 0.8076 | 27.0 | 729 | 1.6022 | 0.2667 |
| 0.7686 | 28.0 | 756 | 1.5925 | 0.2889 |
| 0.7691 | 29.0 | 783 | 1.5965 | 0.2889 |
| 0.7835 | 30.0 | 810 | 1.5836 | 0.2889 |
| 0.7441 | 31.0 | 837 | 1.5828 | 0.2889 |
| 0.7775 | 32.0 | 864 | 1.5709 | 0.2889 |
| 0.7317 | 33.0 | 891 | 1.5664 | 0.2889 |
| 0.7292 | 34.0 | 918 | 1.5626 | 0.2889 |
| 0.7179 | 35.0 | 945 | 1.5496 | 0.2667 |
| 0.7386 | 36.0 | 972 | 1.5502 | 0.2889 |
| 0.7342 | 37.0 | 999 | 1.5475 | 0.3111 |
| 0.734 | 38.0 | 1026 | 1.5457 | 0.3111 |
| 0.7069 | 39.0 | 1053 | 1.5425 | 0.3111 |
| 0.7143 | 40.0 | 1080 | 1.5429 | 0.3111 |
| 0.7105 | 41.0 | 1107 | 1.5401 | 0.3111 |
| 0.7189 | 42.0 | 1134 | 1.5394 | 0.3111 |
| 0.7216 | 43.0 | 1161 | 1.5376 | 0.3111 |
| 0.6896 | 44.0 | 1188 | 1.5358 | 0.3111 |
| 0.7099 | 45.0 | 1215 | 1.5345 | 0.3111 |
| 0.6751 | 46.0 | 1242 | 1.5331 | 0.3111 |
| 0.6824 | 47.0 | 1269 | 1.5327 | 0.3111 |
| 0.7027 | 48.0 | 1296 | 1.5326 | 0.3111 |
| 0.7357 | 49.0 | 1323 | 1.5326 | 0.3111 |
| 0.6799 | 50.0 | 1350 | 1.5326 | 0.3111 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
rmhirota/model_dir | rmhirota | 2023-11-19T17:08:25Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-31T02:55:29Z | ---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
model-index:
- name: model_dir
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_dir
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
feynman-integrals-nn/t331ZZZM | feynman-integrals-nn | 2023-11-19T17:07:47Z | 6 | 0 | transformers | [
"transformers",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-10-17T17:50:47Z | ---
license: cc-by-4.0
---
# t331ZZZM
* [model](https://huggingface.co/feynman-integrals-nn/t331ZZZM)
* [data](https://huggingface.co/datasets/feynman-integrals-nn/t331ZZZM)
* [source](https://gitlab.com/feynman-integrals-nn/feynman-integrals-nn/-/tree/main/t331ZZZM)
|
jafetsierra/output | jafetsierra | 2023-11-19T17:02:41Z | 3 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-11-18T03:42:56Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - jafetsierra/output
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
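A hedged usage sketch with `diffusers` (assuming a version that supports `load_lora_weights`; the prompt and step count are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("jafetsierra/output")  # the LoRA weights from this repo

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```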
|
LoneStriker/Yi-34B-Spicyboros-3.1-3-4.0bpw-h6-exl2 | LoneStriker | 2023-11-19T17:00:45Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:unalignment/spicy-3.1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-19T16:50:00Z | ---
license: other
license_name: yi-license
license_link: LICENSE
datasets:
- unalignment/spicy-3.1
---
# Fine-tune of Y-34B with Spicyboros-3.1-3
Three epochs of fine-tuning with @jondurbin's SpicyBoros-3.1 dataset. The 5.0bpw and 5.15bpw quants should fit on a single 3090/4090 (you may need to enable the 8-bit cache); the 6.0bpw and 8.0bpw quants will require more than one 24 GB VRAM GPU.
**Please note:** you may have to turn down repetition penalty to ~1.0. The model seems to get into "thesaurus" mode sometimes without this change.
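As a hedged sketch of that recommendation with the `exllamav2` loader (class and attribute names reflect exllamav2 as of late 2023 and may differ in other versions; the local path is an assumption):
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Yi-34B-Spicyboros-3.1-3-4.0bpw-h6-exl2"  # local download path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # an 8-bit cache variant also exists
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.token_repetition_penalty = 1.0  # the recommendation above

print(generator.generate_simple("Once upon a time,", settings, 200))
```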
# Original Yi-34B Model Card Below
<div align="center">
<h1>
Yi
</h1>
</div>
## Introduction
The **Yi** series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). The first public release contains two base models with parameter sizes of 6B and 34B.
## News
- 🎯 **2023/11/02**: The base models `Yi-6B` and `Yi-34B` are released.
## Model Performance
| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Commonsense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :-------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | 39.8 |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 26.0 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| **Yi-34B** | **76.3** | **83.7** | **81.4** | **82.8** | **54.3** | **80.1** | **76.4** | **37.1** |
While benchmarking open-source models, we have observed a disparity between the results generated by our pipeline and those reported in public sources (e.g. OpenCompass). Upon conducting a more in-depth investigation of this difference, we have discovered that various models may employ different prompts, post-processing strategies, and sampling techniques, potentially resulting in significant variations in the outcomes. Our prompt and post-processing strategy remains consistent with the original benchmark, and greedy decoding is employed during evaluation without any post-processing of the generated content. For scores that were not reported by the original authors (including scores reported with different settings), we try to obtain results with our own pipeline.
To extensively evaluate the model's capability, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted in a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due to technical constraints, we did not test Falcon-180B on QuAC and OBQA; its score is derived by averaging the scores on the remaining tasks. Since the scores for these two tasks are generally lower than the average, we believe that Falcon-180B's performance was not underestimated.
## Disclaimer
Although we use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability, due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns.
## License
The Yi series models must adhere to the [Model License Agreement](https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE).
For any questions related to licensing and copyright, please contact us ([[email protected]](mailto:[email protected])).
|
aloobun/TinyAiroboros-2.2.1 | aloobun | 2023-11-19T17:00:18Z | 10 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"airoboros",
"tinyllama",
"dataset:jondurbin/airoboros-2.2.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-19T20:14:47Z | ---
datasets:
- jondurbin/airoboros-2.2.1
tags:
- airoboros
- tinyllama
---
This model is a version of PY007/TinyLlama-1.1B-Chat-v0.3 fine-tuned on 15k rows of the airoboros-2.2.1 dataset.
## lm-eval
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.2671|± |0.0129|
| | |acc_norm|0.2850|± |0.0132|
|arc_easy | 0|acc |0.5673|± |0.0102|
| | |acc_norm|0.5109|± |0.0103|
|boolq | 1|acc |0.6040|± |0.0086|
|hellaswag | 0|acc |0.4155|± |0.0049|
| | |acc_norm|0.5420|± |0.0050|
|openbookqa | 0|acc |0.2200|± |0.0185|
| | |acc_norm|0.3420|± |0.0212|
|piqa | 0|acc |0.7057|± |0.0106|
| | |acc_norm|0.6970|± |0.0107|
|winogrande | 0|acc |0.5714|± |0.0139|
```
## Usage:
```
from transformers import AutoTokenizer
import transformers
import torch
model = "aloobun/TinyAiroboros-2.2.1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
prompt = "Write a short story about a dystopian society."
sequences = pipeline(
f'[INST] {prompt} [/INST]',
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=1024,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
|
e-n-v-y/envy-primordial-xl-01 | e-n-v-y | 2023-11-19T16:56:46Z | 4,758 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"landscapes",
"city",
"concept",
"architecture",
"scifi",
"scenery",
"kaiju",
"fantasy",
"kaijuu",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
]
| text-to-image | 2023-11-19T16:56:45Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Sell&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- landscapes
- city
- concept
- architecture
- scifi
- scenery
- kaiju
- fantasy
- kaijuu
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: primordial
widget:
- text: 'primordial, anime style, digital painting, Robotic Research Stations'
output:
url: >-
3753083.jpeg
- text: 'primordial, anime style, digital painting, open courtyard in a techno-pessimist,solemn scifi megastructure at the end of the multiverse, masterpiece'
output:
url: >-
3753079.jpeg
- text: 'primordial, anime style, digital painting, Fairy Glade, haze'
output:
url: >-
3752990.jpeg
- text: 'primordial, anime style, digital painting, Rainbow Falls, haze'
output:
url: >-
3752993.jpeg
- text: 'primordial, anime style, digital painting, High-Pressure Impact Craters, haze'
output:
url: >-
3752996.jpeg
- text: 'primordial, anime style, digital painting, a dingy,indescribable scifi topia beyond the end of reality, masterpiece, haze'
output:
url: >-
3753001.jpeg
- text: 'primordial, anime style, digital painting, Reaper''s Soul Plains, <!haze!'
output:
url: >-
3753028.jpeg
- text: 'primordial, anime style, digital painting, a techno-optimist,solemn fantasy subterranean megacity outside of the universe, masterpiece'
output:
url: >-
3753032.jpeg
- text: 'primordial, anime style, digital painting, noon, architecture, "at the Transdimensional Constellation"'
output:
url: >-
3753034.jpeg
- text: 'primordial, anime style, digital painting, morning, blue sky, clouds, scenery, in a Wetlands'
output:
url: >-
3753047.jpeg
---
# Envy Primordial XL 01
<Gallery />
<p>This LoRA adds insanely massive structures and kaijus to your cities and landscapes, and scales everything way up. Trigger word is "primordial". If you don't want kaijuus, add "kaijuu" to your negative prompt.</p>
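A hedged usage sketch with `diffusers` (assuming the SDXL base model and a version with `load_lora_weights`; the prompt is taken from the examples below):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("e-n-v-y/envy-primordial-xl-01")

image = pipe(
    "primordial, anime style, digital painting, Fairy Glade, haze",
    negative_prompt="kaijuu",  # per the note above, if you don't want kaijuu
).images[0]
image.save("primordial.png")
```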
## Image examples for the model:

> primordial, anime style, digital painting, open courtyard in a techno-pessimist,solemn scifi megastructure at the end of the multiverse, masterpiece

> primordial, anime style, digital painting, Fairy Glade, haze

> primordial, anime style, digital painting, Rainbow Falls, haze

> primordial, anime style, digital painting, High-Pressure Impact Craters, haze

> primordial, anime style, digital painting, a dingy,indescribable scifi topia beyond the end of reality, masterpiece, haze

> primordial, anime style, digital painting, Reaper's Soul Plains, <!haze!

> primordial, anime style, digital painting, a techno-optimist,solemn fantasy subterranean megacity outside of the universe, masterpiece

> primordial, anime style, digital painting, noon, architecture, "at the Transdimensional Constellation"

> primordial, anime style, digital painting, morning, blue sky, clouds, scenery, in a Wetlands
|
Anwaarma/Improved-MARBERT-twitter-sentiment-nodroput-Twitter | Anwaarma | 2023-11-19T16:54:38Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:UBC-NLP/MARBERTv2",
"base_model:finetune:UBC-NLP/MARBERTv2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-19T16:51:00Z | ---
base_model: UBC-NLP/MARBERTv2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Improved-MARBERT-twitter-sentiment-nodroput-Twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Improved-MARBERT-twitter-sentiment-nodroput-Twitter
This model is a fine-tuned version of [UBC-NLP/MARBERTv2](https://huggingface.co/UBC-NLP/MARBERTv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9314
- Accuracy: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5838 | 0.55 | 50 | 0.6058 | 0.71 |
| 0.3547 | 1.1 | 100 | 0.3875 | 0.84 |
| 0.2735 | 1.65 | 150 | 0.3429 | 0.87 |
| 0.1925 | 2.2 | 200 | 0.3658 | 0.87 |
| 0.1676 | 2.75 | 250 | 0.5335 | 0.83 |
| 0.1366 | 3.3 | 300 | 0.5710 | 0.81 |
| 0.1202 | 3.85 | 350 | 0.5037 | 0.84 |
| 0.0843 | 4.4 | 400 | 0.5798 | 0.83 |
| 0.086 | 4.95 | 450 | 0.9457 | 0.79 |
| 0.0589 | 5.49 | 500 | 1.1547 | 0.76 |
| 0.0599 | 6.04 | 550 | 0.8437 | 0.83 |
| 0.036 | 6.59 | 600 | 0.9878 | 0.78 |
| 0.0557 | 7.14 | 650 | 0.7223 | 0.86 |
| 0.041 | 7.69 | 700 | 0.7275 | 0.85 |
| 0.0256 | 8.24 | 750 | 0.7327 | 0.85 |
| 0.0273 | 8.79 | 800 | 0.7270 | 0.84 |
| 0.0165 | 9.34 | 850 | 0.8266 | 0.85 |
| 0.0154 | 9.89 | 900 | 0.7583 | 0.87 |
| 0.0075 | 10.44 | 950 | 0.8894 | 0.85 |
| 0.0073 | 10.99 | 1000 | 0.8858 | 0.85 |
| 0.0126 | 11.54 | 1050 | 0.9245 | 0.84 |
| 0.0047 | 12.09 | 1100 | 0.9335 | 0.84 |
| 0.0073 | 12.64 | 1150 | 0.9405 | 0.85 |
| 0.0112 | 13.19 | 1200 | 0.9801 | 0.84 |
| 0.0068 | 13.74 | 1250 | 0.9314 | 0.85 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.7
- Tokenizers 0.14.1
|
Anwaarma/Improved-Arabert-twitter-sentiment-Twitter | Anwaarma | 2023-11-19T16:46:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02-twitter",
"base_model:finetune:aubmindlab/bert-base-arabertv02-twitter",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-19T16:43:55Z | ---
base_model: aubmindlab/bert-base-arabertv02-twitter
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Improved-Arabert-twitter-sentiment-Twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Improved-Arabert-twitter-sentiment-Twitter
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6342
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5264 | 0.55 | 50 | 0.5252 | 0.71 |
| 0.3041 | 1.1 | 100 | 0.4085 | 0.81 |
| 0.2205 | 1.65 | 150 | 0.3303 | 0.88 |
| 0.1476 | 2.2 | 200 | 0.3889 | 0.87 |
| 0.1219 | 2.75 | 250 | 0.3775 | 0.87 |
| 0.0972 | 3.3 | 300 | 0.3929 | 0.88 |
| 0.0917 | 3.85 | 350 | 0.4727 | 0.86 |
| 0.0596 | 4.4 | 400 | 0.4406 | 0.89 |
| 0.0556 | 4.95 | 450 | 0.4949 | 0.89 |
| 0.0375 | 5.49 | 500 | 0.4935 | 0.9 |
| 0.0269 | 6.04 | 550 | 0.5976 | 0.88 |
| 0.0235 | 6.59 | 600 | 0.5543 | 0.89 |
| 0.0191 | 7.14 | 650 | 0.5941 | 0.88 |
| 0.0109 | 7.69 | 700 | 0.6562 | 0.89 |
| 0.0198 | 8.24 | 750 | 0.6342 | 0.89 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.7
- Tokenizers 0.14.1
|
Anwaarma/Improved-Arabert-twitter-sentiment-No-dropout-Twitter | Anwaarma | 2023-11-19T16:43:41Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02-twitter",
"base_model:finetune:aubmindlab/bert-base-arabertv02-twitter",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-19T16:41:00Z | ---
base_model: aubmindlab/bert-base-arabertv02-twitter
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Improved-Arabert-twitter-sentiment-No-dropout-Twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Improved-Arabert-twitter-sentiment-No-dropout-Twitter
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6342
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5264 | 0.55 | 50 | 0.5252 | 0.71 |
| 0.3041 | 1.1 | 100 | 0.4085 | 0.81 |
| 0.2205 | 1.65 | 150 | 0.3303 | 0.88 |
| 0.1476 | 2.2 | 200 | 0.3890 | 0.87 |
| 0.1219 | 2.75 | 250 | 0.3775 | 0.87 |
| 0.0972 | 3.3 | 300 | 0.3930 | 0.88 |
| 0.0917 | 3.85 | 350 | 0.4728 | 0.86 |
| 0.0596 | 4.4 | 400 | 0.4406 | 0.89 |
| 0.0556 | 4.95 | 450 | 0.4949 | 0.89 |
| 0.0375 | 5.49 | 500 | 0.4935 | 0.9 |
| 0.0269 | 6.04 | 550 | 0.5977 | 0.88 |
| 0.0235 | 6.59 | 600 | 0.5543 | 0.89 |
| 0.0191 | 7.14 | 650 | 0.5941 | 0.88 |
| 0.0109 | 7.69 | 700 | 0.6562 | 0.89 |
| 0.0198 | 8.24 | 750 | 0.6342 | 0.89 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.7
- Tokenizers 0.14.1
|
Asanokurokai/ppo-Huggy | Asanokurokai | 2023-11-19T16:42:38Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-11-19T16:42:32Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Asanokurokai/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ByteWave/Cheus-11B | ByteWave | 2023-11-19T16:37:30Z | 1,501 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-19T14:47:11Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Cheus-11B by ByteWave
<img src="_435ebdc5-211c-4fb6-a175-861ffe30e68f.jpeg" width="300" height="200" alt="Cheus-11B">
Merge of [lvkaokao/mistral-7b-finetuned-orca-dpo-v2](https://huggingface.co/lvkaokao/mistral-7b-finetuned-orca-dpo-v2) and [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [Coming soon]()
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | Coming soon |
| ARC (25-shot) | Coming soon |
| HellaSwag (10-shot) | Coming soon |
| MMLU (5-shot) | Coming soon |
| TruthfulQA (0-shot) | Coming soon |
| Winogrande (5-shot) | Coming soon |
| GSM8K (5-shot) | Coming soon |
| DROP (3-shot) | Coming soon | |
thevox/en-nb-7b | thevox | 2023-11-19T16:34:21Z | 13 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"translation",
"en",
"no",
"nb",
"dataset:thevox/en-nb-15k",
"license:mpl-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-19T10:43:42Z | ---
language:
- en
- 'no'
- nb
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
- translation
inference: true
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
datasets:
- thevox/en-nb-15k
license: mpl-2.0
metrics:
- perplexity
pipeline_tag: text-generation
---
# Model Card
## Summary
English-to-Norwegian translation model, intended to rival DeepL's natural translations.
Context length is 1024 tokens each for input and output (2048 total).
Outputs context, a translation, and an improved translation.
### Training
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
Trained in 4-bit with LoRA (r=64) for 2 epochs. Validation perplexity: 1.245
**Hardware**: 1x A100 80GB for 12 hours
## Usage
### Input
Recommended prompt format:
```
<|prompt|>Oversett til Norsk:
</s><|answer|>
```
### Inference
Using 2-4 beams when generating is recommended.
### Code
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.34.0
```
Also make sure you are providing your Hugging Face token to the pipeline if the model is in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="thevox/en-nb-7b",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"thevox/en-nb-7b",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"thevox/en-nb-7b",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "thevox/en-nb-7b" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the model with quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Sharding across multiple GPUs is also possible by setting ```device_map="auto"```.
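For example, a minimal sketch of the options just described:
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "thevox/en-nb-7b",
    load_in_4bit=True,    # or load_in_8bit=True
    device_map="auto",    # shard across the available GPUs
    trust_remote_code=True,
)
```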
## Model Architecture
```
MistralForCausalLM(
(model): MistralModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=1024, bias=False)
(v_proj): Linear(in_features=4096, out_features=1024, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
(up_proj): Linear(in_features=4096, out_features=14336, bias=False)
(down_proj): Linear(in_features=14336, out_features=4096, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
LoneStriker/Yi-34B-Spicyboros-3.1-3-3.0bpw-h6-exl2 | LoneStriker | 2023-11-19T16:33:34Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:unalignment/spicy-3.1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-19T16:25:16Z | ---
license: other
license_name: yi-license
license_link: LICENSE
datasets:
- unalignment/spicy-3.1
---
# Fine-tune of Yi-34B with Spicyboros-3.1-3
Three epochs of fine-tuning with @jondurbin's SpicyBoros-3.1 dataset. 5.0bpw and 5.15bpw should fit on a single 3090/4090 (you may need to enable the 8-bit cache); 6.0bpw and 8.0bpw will require more than one 24 GB VRAM GPU.
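For a rough sense of the numbers, here is back-of-envelope arithmetic for the weight memory at each bitrate (the ~34.4B parameter count is an approximation, and real usage adds KV cache and activations on top):
```python
# Rough weight-memory estimate for a ~34.4B-parameter model at various exl2 bitrates.
params = 34.4e9  # approximate Yi-34B parameter count (an assumption)
for bpw in (3.0, 5.0, 6.0, 8.0):
    print(f"{bpw} bpw ~ {params * bpw / 8 / 1024**3:.1f} GiB of weights")
```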
**Please note:** you may have to turn down repetition penalty to ~1.0. The model seems to get into "thesaurus" mode sometimes without this change.
# Original Yi-34B Model Card Below
<div align="center">
<h1>
Yi
</h1>
</div>
## Introduction
The **Yi** series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). The first public release contains two base models with the parameter size of 6B and 34B.
## News
- 🎯 **2023/11/02**: The base model of `Yi-6B` and `Yi-34B`
## Model Performance
| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Commonsense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :-------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | 39.8 |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 26.0 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| **Yi-34B** | **76.3** | **83.7** | **81.4** | **82.8** | **54.3** | **80.1** | **76.4** | **37.1** |
While benchmarking open-source models, we have observed a disparity between the results generated by our pipeline and those reported in public sources (e.g. OpenCompass). Upon conducting a more in-depth investigation of this difference, we have discovered that various models may employ different prompts, post-processing strategies, and sampling techniques, potentially resulting in significant variations in the outcomes. Our prompt and post-processing strategy remains consistent with the original benchmark, and greedy decoding is employed during evaluation without any post-processing of the generated content. For scores that were not reported by the original authors (including scores reported with different settings), we try to obtain results with our pipeline.
To evaluate the model's capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted in a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due to technical constraints, we did not test Falcon-180B on QuAC and OBQA; the score is derived by averaging the scores on the remaining tasks. Since the scores for these two tasks are generally lower than the average, we believe that Falcon-180B's performance was not underestimated.
## Disclaimer
Although we use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability, due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns.
## License
The Yi series models must adhere to the [Model License Agreement](https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE).
For any questions related to licensing and copyright, please contact us ([[email protected]](mailto:[email protected])).
|
Binou/vit-base-plankton | Binou | 2023-11-19T16:30:47Z | 39 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mobilevit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:apple/mobilevit-xx-small",
"base_model:finetune:apple/mobilevit-xx-small",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-18T13:09:03Z | ---
license: other
base_model: apple/mobilevit-xx-small
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-plankton
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: plankton_fairscope
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8050847457627118
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-plankton
This model is a fine-tuned version of [apple/mobilevit-xx-small](https://huggingface.co/apple/mobilevit-xx-small) on the plankton_fairscope dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7642
- Accuracy: 0.8051
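A minimal inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

# The pipeline loads the fine-tuned checkpoint together with its label mapping.
classifier = pipeline("image-classification", model="Binou/vit-base-plankton")
print(classifier("path/to/plankton_image.jpg"))  # top predicted classes with scores
```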
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
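For reference, a sketch of how these hyperparameters map onto `transformers.TrainingArguments` (the exact training script is not included in this card):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit-base-plankton",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```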
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5476 | 0.52 | 100 | 1.2745 | 0.7419 |
| 1.0997 | 1.04 | 200 | 0.8653 | 0.7842 |
| 0.9498 | 1.56 | 300 | 0.7642 | 0.8051 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mubashirsaeed/care-bot-harry-potter-falcon-7b-3 | mubashirsaeed | 2023-11-19T16:23:37Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"region:us"
]
| null | 2023-11-19T16:23:26Z | ---
library_name: peft
base_model: tiiuae/falcon-7b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
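Expressed as a `transformers.BitsAndBytesConfig`, this corresponds to the following sketch:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```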
### Framework versions
- PEFT 0.6.3.dev0
|
jbochi/madlad400-7b-mt-bt | jbochi | 2023-11-19T16:16:23Z | 76 | 6 | transformers | [
"transformers",
"safetensors",
"gguf",
"t5",
"text2text-generation",
"text-generation-inference",
"translation",
"multilingual",
"en",
"ru",
"es",
"fr",
"de",
"it",
"pt",
"pl",
"nl",
"vi",
"tr",
"sv",
"id",
"ro",
"cs",
"zh",
"hu",
"ja",
"th",
"fi",
"fa",
"uk",
"da",
"el",
"no",
"bg",
"sk",
"ko",
"ar",
"lt",
"ca",
"sl",
"he",
"et",
"lv",
"hi",
"sq",
"ms",
"az",
"sr",
"ta",
"hr",
"kk",
"is",
"ml",
"mr",
"te",
"af",
"gl",
"fil",
"be",
"mk",
"eu",
"bn",
"ka",
"mn",
"bs",
"uz",
"ur",
"sw",
"yue",
"ne",
"kn",
"kaa",
"gu",
"si",
"cy",
"eo",
"la",
"hy",
"ky",
"tg",
"ga",
"mt",
"my",
"km",
"tt",
"so",
"ku",
"ps",
"pa",
"rw",
"lo",
"ha",
"dv",
"fy",
"lb",
"ckb",
"mg",
"gd",
"am",
"ug",
"ht",
"grc",
"hmn",
"sd",
"jv",
"mi",
"tk",
"ceb",
"yi",
"ba",
"fo",
"or",
"xh",
"su",
"kl",
"ny",
"sm",
"sn",
"co",
"zu",
"ig",
"yo",
"pap",
"st",
"haw",
"as",
"oc",
"cv",
"lus",
"tet",
"gsw",
"sah",
"br",
"rm",
"sa",
"bo",
"om",
"se",
"ce",
"cnh",
"ilo",
"hil",
"udm",
"os",
"lg",
"ti",
"vec",
"ts",
"tyv",
"kbd",
"ee",
"iba",
"av",
"kha",
"to",
"tn",
"nso",
"fj",
"zza",
"ak",
"ada",
"otq",
"dz",
"bua",
"cfm",
"ln",
"chm",
"gn",
"krc",
"wa",
"hif",
"yua",
"srn",
"war",
"rom",
"bik",
"pam",
"sg",
"lu",
"ady",
"kbp",
"syr",
"ltg",
"myv",
"iso",
"kac",
"bho",
"ay",
"kum",
"qu",
"za",
"pag",
"ngu",
"ve",
"pck",
"zap",
"tyz",
"hui",
"bbc",
"tzo",
"tiv",
"ksd",
"gom",
"min",
"ang",
"nhe",
"bgp",
"nzi",
"nnb",
"nv",
"zxx",
"bci",
"kv",
"new",
"mps",
"alt",
"meu",
"bew",
"fon",
"iu",
"abt",
"mgh",
"mnw",
"tvl",
"dov",
"tlh",
"ho",
"kw",
"mrj",
"meo",
"crh",
"mbt",
"emp",
"ace",
"ium",
"mam",
"gym",
"mai",
"crs",
"pon",
"ubu",
"fip",
"quc",
"gv",
"kj",
"btx",
"ape",
"chk",
"rcf",
"shn",
"tzh",
"mdf",
"ppk",
"ss",
"gag",
"cab",
"kri",
"seh",
"ibb",
"tbz",
"bru",
"enq",
"ach",
"cuk",
"kmb",
"wo",
"kek",
"qub",
"tab",
"bts",
"kos",
"rwo",
"cak",
"tuc",
"bum",
"cjk",
"gil",
"stq",
"tsg",
"quh",
"mak",
"arn",
"ban",
"jiv",
"sja",
"yap",
"tcy",
"toj",
"twu",
"xal",
"amu",
"rmc",
"hus",
"nia",
"kjh",
"bm",
"guh",
"mas",
"acf",
"dtp",
"ksw",
"bzj",
"din",
"zne",
"mad",
"msi",
"mag",
"mkn",
"kg",
"lhu",
"ch",
"qvi",
"mh",
"djk",
"sus",
"mfe",
"srm",
"dyu",
"ctu",
"gui",
"pau",
"inb",
"bi",
"mni",
"guc",
"jam",
"wal",
"jac",
"bas",
"gor",
"skr",
"nyu",
"noa",
"sda",
"gub",
"nog",
"cni",
"teo",
"tdx",
"sxn",
"rki",
"nr",
"frp",
"alz",
"taj",
"lrc",
"cce",
"rn",
"jvn",
"hvn",
"nij",
"dwr",
"izz",
"msm",
"bus",
"ktu",
"chr",
"maz",
"tzj",
"suz",
"knj",
"bim",
"gvl",
"bqc",
"tca",
"pis",
"prk",
"laj",
"mel",
"qxr",
"niq",
"ahk",
"shp",
"hne",
"spp",
"koi",
"krj",
"quf",
"luz",
"agr",
"tsc",
"mqy",
"gof",
"gbm",
"miq",
"dje",
"awa",
"bjj",
"qvz",
"sjp",
"tll",
"raj",
"kjg",
"bgz",
"quy",
"cbk",
"akb",
"oj",
"ify",
"mey",
"ks",
"cac",
"brx",
"qup",
"syl",
"jax",
"ff",
"ber",
"tks",
"trp",
"mrw",
"adh",
"smt",
"srr",
"ffm",
"qvc",
"mtr",
"ann",
"aa",
"noe",
"nut",
"gyn",
"kwi",
"xmm",
"msb",
"dataset:allenai/MADLAD-400",
"arxiv:2309.04662",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-11-06T12:53:23Z | ---
license: apache-2.0
language:
- multilingual
- en
- ru
- es
- fr
- de
- it
- pt
- pl
- nl
- vi
- tr
- sv
- id
- ro
- cs
- zh
- hu
- ja
- th
- fi
- fa
- uk
- da
- el
- "no"
- bg
- sk
- ko
- ar
- lt
- ca
- sl
- he
- et
- lv
- hi
- sq
- ms
- az
- sr
- ta
- hr
- kk
- is
- ml
- mr
- te
- af
- gl
- fil
- be
- mk
- eu
- bn
- ka
- mn
- bs
- uz
- ur
- sw
- yue
- ne
- kn
- kaa
- gu
- si
- cy
- eo
- la
- hy
- ky
- tg
- ga
- mt
- my
- km
- tt
- so
- ku
- ps
- pa
- rw
- lo
- ha
- dv
- fy
- lb
- ckb
- mg
- gd
- am
- ug
- ht
- grc
- hmn
- sd
- jv
- mi
- tk
- ceb
- yi
- ba
- fo
- or
- xh
- su
- kl
- ny
- sm
- sn
- co
- zu
- ig
- yo
- pap
- st
- haw
- as
- oc
- cv
- lus
- tet
- gsw
- sah
- br
- rm
- sa
- bo
- om
- se
- ce
- cnh
- ilo
- hil
- udm
- os
- lg
- ti
- vec
- ts
- tyv
- kbd
- ee
- iba
- av
- kha
- to
- tn
- nso
- fj
- zza
- ak
- ada
- otq
- dz
- bua
- cfm
- ln
- chm
- gn
- krc
- wa
- hif
- yua
- srn
- war
- rom
- bik
- pam
- sg
- lu
- ady
- kbp
- syr
- ltg
- myv
- iso
- kac
- bho
- ay
- kum
- qu
- za
- pag
- ngu
- ve
- pck
- zap
- tyz
- hui
- bbc
- tzo
- tiv
- ksd
- gom
- min
- ang
- nhe
- bgp
- nzi
- nnb
- nv
- zxx
- bci
- kv
- new
- mps
- alt
- meu
- bew
- fon
- iu
- abt
- mgh
- mnw
- tvl
- dov
- tlh
- ho
- kw
- mrj
- meo
- crh
- mbt
- emp
- ace
- ium
- mam
- gym
- mai
- crs
- pon
- ubu
- fip
- quc
- gv
- kj
- btx
- ape
- chk
- rcf
- shn
- tzh
- mdf
- ppk
- ss
- gag
- cab
- kri
- seh
- ibb
- tbz
- bru
- enq
- ach
- cuk
- kmb
- wo
- kek
- qub
- tab
- bts
- kos
- rwo
- cak
- tuc
- bum
- cjk
- gil
- stq
- tsg
- quh
- mak
- arn
- ban
- jiv
- sja
- yap
- tcy
- toj
- twu
- xal
- amu
- rmc
- hus
- nia
- kjh
- bm
- guh
- mas
- acf
- dtp
- ksw
- bzj
- din
- zne
- mad
- msi
- mag
- mkn
- kg
- lhu
- ch
- qvi
- mh
- djk
- sus
- mfe
- srm
- dyu
- ctu
- gui
- pau
- inb
- bi
- mni
- guc
- jam
- wal
- jac
- bas
- gor
- skr
- nyu
- noa
- sda
- gub
- nog
- cni
- teo
- tdx
- sxn
- rki
- nr
- frp
- alz
- taj
- lrc
- cce
- rn
- jvn
- hvn
- nij
- dwr
- izz
- msm
- bus
- ktu
- chr
- maz
- tzj
- suz
- knj
- bim
- gvl
- bqc
- tca
- pis
- prk
- laj
- mel
- qxr
- niq
- ahk
- shp
- hne
- spp
- koi
- krj
- quf
- luz
- agr
- tsc
- mqy
- gof
- gbm
- miq
- dje
- awa
- bjj
- qvz
- sjp
- tll
- raj
- kjg
- bgz
- quy
- cbk
- akb
- oj
- ify
- mey
- ks
- cac
- brx
- qup
- syl
- jax
- ff
- ber
- tks
- trp
- mrw
- adh
- smt
- srr
- ffm
- qvc
- mtr
- ann
- kaa
- aa
- noe
- nut
- gyn
- kwi
- xmm
- msb
library_name: transformers
tags:
- text2text-generation
- text-generation-inference
datasets:
- allenai/MADLAD-400
pipeline_tag: translation
widget:
- text: "<2en> Como vai, amigo?"
example_title: "Translation to English"
- text: "<2de> Do you speak German?"
example_title: "Translation to German"
---
# Model Card for MADLAD-400-7B-MT
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
# TL;DR
MADLAD-400-7B-MT-BT is a multilingual machine translation model based on the T5 architecture that was
trained on 250 billion tokens covering over 450 languages using publicly available data.
It is competitive with models that are significantly larger.
It is a fine-tuned version of the 7.2B-parameter model on backtranslated data. The authors say in the [paper](https://arxiv.org/pdf/2309.04662.pdf) that:
> While this setup is very likely sub-optimal, we see that back-translation
> greatly improves en2xx translation (by 3.0 chrf, in the case of Flores-200) in most cases.
**Disclaimer**: [Juarez Bochi](https://huggingface.co/jbochi), who was not involved in this research, converted
the original weights and wrote the contents of this model card based on the original paper and Flan-T5.
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** Multilingual (400+ languages)
- **License:** Apache 2.0
- **Related Models:** [All MADLAD-400 Checkpoints](https://huggingface.co/models?search=madlad)
- **Original Checkpoints:** [All Original MADLAD-400 Checkpoints](https://github.com/google-research/google-research/tree/master/madlad_400)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2309.04662)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face MADLAD-400 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/MADLAD-400) - [Pending PR](https://github.com/huggingface/transformers/pull/27471)
# Usage
Find below some example scripts on how to use the model:
## Using the Pytorch model with `transformers`
### Running the model on a CPU or GPU
<details>
<summary> Click to expand </summary>
First, install the Python packages that are required:
`pip install transformers accelerate sentencepiece`
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_name = 'jbochi/madlad400-7b-mt-bt'
model = T5ForConditionalGeneration.from_pretrained(model_name, device_map="auto")
tokenizer = T5Tokenizer.from_pretrained(model_name)
text = "<2pt> I love pizza!"
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(input_ids=input_ids)
tokenizer.decode(outputs[0], skip_special_tokens=True)
# Eu adoro pizza!
```
</details>
## Running the model with Candle
<details>
<summary> Click to expand </summary>
Usage with [candle](https://github.com/huggingface/candle):
```bash
$ cargo run --example t5 --release -- \
--model-id "jbochi/madlad400-7b-mt-bt" \
--prompt "<2de> How are you, my friend?" \
--decode --temperature 0
```
</details>
# Uses
## Direct Use and Downstream Use
> Primary intended uses: Machine Translation and multilingual NLP tasks on over 400 languages.
> Primary intended users: Research community.
## Out-of-Scope Use
> These models are trained on general domain data and are therefore not meant to
> work on domain-specific models out-of-the box. Moreover, these research models have not been assessed
> for production usecases.
# Bias, Risks, and Limitations
> We note that we evaluate on only 204 of the languages supported by these models and on machine translation
> and few-shot machine translation tasks. Users must consider use of this model carefully for their own
> usecase.
## Ethical considerations and risks
> We trained these models with MADLAD-400 and publicly available data to create baseline models that
> support NLP for over 400 languages, with a focus on languages underrepresented in large-scale corpora.
> Given that these models were trained with web-crawled datasets that may contain sensitive, offensive or
> otherwise low-quality content despite extensive preprocessing, it is still possible that these issues to the
> underlying training data may cause differences in model performance and toxic (or otherwise problematic)
> output for certain domains. Moreover, large models are dual use technologies that have specific risks
> associated with their use and development. We point the reader to surveys such as those written by
> Weidinger et al. or Bommasani et al. for a more detailed discussion of these risks, and to Liebling
> et al. for a thorough discussion of the risks of machine translation systems.
## Known Limitations
More information needed
## Sensitive Use:
More information needed
# Training Details
> We train models of various sizes: a 3B, 32-layer parameter model,
> a 7.2B 48-layer parameter model and a 10.7B 32-layer parameter model.
> We share all parameters of the model across language pairs,
> and use a Sentence Piece Model with 256k tokens shared on both the encoder and decoder
> side. Each input sentence has a <2xx> token prepended to the source sentence to indicate the target
> language.
See the [research paper](https://arxiv.org/pdf/2309.04662.pdf) for further details.
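As a quick check of the `<2xx>` convention, the target-language tag should tokenize as a single sentencepiece token (a sketch; assumes the tokenizer from the usage section above):
```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("jbochi/madlad400-7b-mt-bt")
print(tokenizer.tokenize("<2pt> I love pizza!"))  # the first token should be the <2pt> tag
```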
## Training Data
> For both the machine translation and language model, MADLAD-400 is used. For the machine translation
> model, a combination of parallel datasources covering 157 languages is also used. Further details are
> described in the [paper](https://arxiv.org/pdf/2309.04662.pdf).
## Training Procedure
See the [research paper](https://arxiv.org/pdf/2309.04662.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
> For evaluation, we used WMT, NTREX, Flores-200 and Gatones datasets as described in Section 4.3 in the [paper](https://arxiv.org/pdf/2309.04662.pdf).
> The translation quality of this model varies based on language, as seen in the paper, and likely varies on
> domain, though we have not assessed this.
## Results



See the [research paper](https://arxiv.org/pdf/2309.04662.pdf) for further details.
# Environmental Impact
More information needed
# Citation
**BibTeX:**
```bibtex
@misc{kudugunta2023madlad400,
title={MADLAD-400: A Multilingual And Document-Level Large Audited Dataset},
author={Sneha Kudugunta and Isaac Caswell and Biao Zhang and Xavier Garcia and Christopher A. Choquette-Choo and Katherine Lee and Derrick Xin and Aditya Kusupati and Romi Stella and Ankur Bapna and Orhan Firat},
year={2023},
eprint={2309.04662},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Factiverse/bart-large-claimdecomp | Factiverse | 2023-11-19T16:15:38Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-19T13:44:42Z | |
Ichsan2895/Merak-7B-v4 | Ichsan2895 | 2023-11-19T16:14:53Z | 75 | 3 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"id",
"en",
"dataset:wikipedia",
"dataset:Ichsan2895/OASST_Top1_Indonesian",
"dataset:Ichsan2895/alpaca-gpt4-indonesian",
"arxiv:2306.02707",
"arxiv:2305.14314",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-11T20:42:22Z | ---
datasets:
- wikipedia
- Ichsan2895/OASST_Top1_Indonesian
- Ichsan2895/alpaca-gpt4-indonesian
language:
- id
- en
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://huggingface.co/Ichsan2895/Merak-7B-v4/resolve/main/FINAL_LOGO/6.png" alt="MERAK" style="width: 50%; min-width: 100px; display: block; margin: auto;">
</div>
# HAPPY TO ANNOUNCE THE RELEASE OF MERAK-7B-V4!
Merak-7B is a Large Language Model for the Indonesian language.
This model is based on [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) and fine-tuned on a set of Indonesian Wikipedia articles that I cleaned beforehand.
Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B is able to run with 16 GB of VRAM.
Licensed under Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0), Merak-7B empowers AI enthusiasts and researchers alike.
Big thanks to all my friends and communities that helped build our first model. Thanks also to Axolotl, a great fine-tuning tool designed to streamline the fine-tuning of various AI models.
Feel free to ask me about the model, and please share the news on your social media.
## HOW TO USE
### Installation
Please make sure you have installed the CUDA driver on your system, along with Python 3.10 and PyTorch 2. Then install these libraries in a terminal:
```
pip install protobuf==4.24.4
pip install bitsandbytes==0.41.1
pip install transformers==4.34.1
pip install peft==0.5.0
pip install accelerate==0.23.0
pip install einops==0.6.1 scipy sentencepiece datasets
```
### Using BitsAndBytes (runs on a GPU with >= 10 GB of VRAM)
[](https://colab.research.google.com/drive/1Tj15gNIx3KnLarDAJdwpa7qXa5nmfAM-?usp=drive_link)
```
import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer
from peft import PeftModel, PeftConfig

model_id = "Ichsan2895/Merak-7B-v4"
config = AutoConfig.from_pretrained(model_id)

BNB_CONFIG = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BNB_CONFIG,
    device_map="auto",
    trust_remote_code=True,
)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
    chat = [
        {"role": "system", "content": "Anda adalah Merak, sebuah model kecerdasan buatan yang dilatih oleh Muhammad Ichsan. Mohon jawab pertanyaan berikut dengan benar, faktual, dan ramah."},
        {"role": "user", "content": question},
    ]

    prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=True)

    with torch.no_grad():
        outputs = model.generate(
            input_ids=inputs["input_ids"].to("cuda"),
            attention_mask=inputs.attention_mask,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.eos_token_id,
            max_new_tokens=256,
        )

    response = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]

    assistant_start = f'''{question} \n assistant\n '''
    response_start = response.find(assistant_start)
    return response[response_start + len(assistant_start):].strip()

prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
### In my experience, you get better answers without BitsAndBytes 4-bit quantization, but it uses more VRAM
[](https://colab.research.google.com/drive/1KVkiaKddrK4focgQJ6ysUA1NypLQPYuF?usp=drive_link)
```
import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer
from peft import PeftModel, PeftConfig

model_id = "Ichsan2895/Merak-7B-v4"
config = AutoConfig.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
    chat = [
        {"role": "system", "content": "Anda adalah Merak, sebuah model kecerdasan buatan yang dilatih oleh Muhammad Ichsan. Mohon jawab pertanyaan berikut dengan benar, faktual, dan ramah."},
        {"role": "user", "content": question},
    ]

    prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=True)

    with torch.no_grad():
        outputs = model.generate(
            input_ids=inputs["input_ids"].to("cuda"),
            attention_mask=inputs.attention_mask,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.eos_token_id,
            max_new_tokens=256,
        )

    response = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]

    assistant_start = f'''{question} \n assistant\n '''
    response_start = response.find(assistant_start)
    return response[response_start + len(assistant_start):].strip()

prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
## CHANGELOG
**v4** = We use [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) instead of Llama-2-Chat-HF. We arrived at it through countless rounds of trial and error, and picked the best one for this model.
What we have done so far:
1st). We fine-tuned it with Wikipedia articles that we cleaned beforehand. This used QLoRA, sped up by DeepSpeed ZeRO 2, for 1 epoch. Axolotl was used for easier fine-tuning configuration.
2nd). We got extra funds. Thanks, all! We did it again like the first step, but with full-parameter fine-tuning (FFT) instead of QLoRA.
3rd). We fine-tuned it with [Ichsan2895/OASST_Top1_Indonesian](https://huggingface.co/datasets/Ichsan2895/OASST_Top1_Indonesian) & [Ichsan2895/alpaca-gpt4-indonesian](https://huggingface.co/datasets/Ichsan2895/alpaca-gpt4-indonesian), with minor modifications so it was suitable for the ChatML format. This was FFT for 4 epochs.
**v3** = Fine-tuned on [Ichsan2895/OASST_Top1_Indonesian](https://huggingface.co/datasets/Ichsan2895/OASST_Top1_Indonesian) & [Ichsan2895/alpaca-gpt4-indonesian](https://huggingface.co/datasets/Ichsan2895/alpaca-gpt4-indonesian)
**v2** = Fine-tuned version of the first Merak-7B model. We fine-tuned it again with the same Indonesian Wikipedia articles, except with a changed prompt style in the questions. It covers 600k Indonesian Wikipedia articles.
**v1** = The first Merak-7B model. We selected and cleaned about 200k Indonesian Wikipedia articles.
## CITATION
```
@software{lian2023mistralorca1,
  title = {MistralOrca: Mistral-7B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset},
  author = {Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca}},
}
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
@article{dettmers2023qlora,
title = {QLoRA: Efficient Finetuning of Quantized LLMs},
author = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal = {arXiv preprint arXiv:2305.14314},
year = {2023}
}
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## HOW TO CITE THIS PROJECT
If you use the Merak-7B model in your research or project, please cite it as:
```
@article{Merak,
  title={Merak-7B: The LLM for Bahasa Indonesia},
  author={Muhammad Ichsan},
  publisher={Hugging Face},
  journal={Hugging Face Repository},
  year={2023}
}
``` |
Ichsan2895/Merak-7B-v4-GGUF | Ichsan2895 | 2023-11-19T16:14:28Z | 151 | 4 | null | [
"gguf",
"text-generation",
"id",
"en",
"dataset:wikipedia",
"dataset:Ichsan2895/OASST_Top1_Indonesian",
"dataset:Ichsan2895/alpaca-gpt4-indonesian",
"arxiv:2306.02707",
"arxiv:2305.14314",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-11T23:23:37Z | ---
datasets:
- wikipedia
- Ichsan2895/OASST_Top1_Indonesian
- Ichsan2895/alpaca-gpt4-indonesian
language:
- id
- en
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
other: mistral
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://huggingface.co/Ichsan2895/Merak-7B-v4/resolve/main/FINAL_LOGO/6.png" alt="MERAK" style="width: 50%; min-width: 100px; display: block; margin: auto;">
</div>
# HAPPY TO ANNOUNCE THE RELEASE OF MERAK-7B-V4-GGUF!
Merak-7B is a Large Language Model for the Indonesian language.
This model is based on [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) and fine-tuned on a set of Indonesian Wikipedia articles that I cleaned beforehand.
Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B is able to run with 16 GB of VRAM.
Merak-7B and all of its derivatives are licensed under Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0). Merak-7B empowers AI enthusiasts and researchers alike.
Big thanks to all my friends and communities that helped build our first model. Thanks also to Axolotl, a great fine-tuning tool designed to streamline the fine-tuning of various AI models.
Feel free to ask me about the model, and please share the news on your social media.
## HOW TO USE
### What is GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
### What software supports GGUF
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
### Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
### Provided files
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Merak-7B-v4-model-Q2_K.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-GGUF/blob/main/Merak-7B-v4-model-q2_k.gguf) | Q2_K | 2 | 3.08 GB| smallest, significant quality loss - not recommended for most purposes |
| [Merak-7B-v4-model-Q3_K_M.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-GGUF/blob/main/Merak-7B-v4-model-q3_k_m.gguf) | Q3_K_M | 3 | 3.52 GB| very small, high quality loss |
| [Merak-7B-v4-model-Q4_0.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-GGUF/blob/main/Merak-7B-v4-model-q4_0.gguf) | Q4_0 | 4 | 4.11 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Merak-7B-v4-model-Q4_K_M.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-GGUF/blob/main/Merak-7B-v4-model-q4_k_m.gguf) | Q4_K_M | 4 | 4.37 GB| medium, balanced quality - recommended |
| [Merak-7B-v4-model-Q5_0.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-GGUF/blob/main/Merak-7B-v4-model-q5_0.gguf) | Q5_0 | 5 | 5 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Merak-7B-v4-model-Q5_K_M.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-GGUF/blob/main/Merak-7B-v4-model-q5_k_m.gguf) | Q5_K_M | 5 | 5.13 GB| large, very low quality loss - recommended |
| [Merak-7B-v4-model-Q6_K.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-GGUF/blob/main/Merak-7B-v4-model-q6_k.gguf) | Q6_K | 6 | 5.94 GB| very large, extremely low quality loss |
| [Merak-7B-v4-model-Q8_0.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-GGUF/blob/main/Merak-7B-v4-model-q8_0.gguf) | Q8_0 | 8 | 7.7 GB| very large, extremely low quality loss - not recommended |
**Note**: RAM requirements roughly track the file sizes above and assume no GPU offloading. If layers are offloaded to the GPU, this reduces RAM usage and uses VRAM instead.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
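As a quick sanity check of the Q4_K figure above (assuming one fp16 scale and one fp16 min per super-block):
```python
# Q4_K: super-blocks of 8 blocks x 32 weights; 4-bit weights,
# 6-bit scales and mins per block, one fp16 scale + fp16 min per super-block.
weights = 8 * 32
bits = weights * 4 + 8 * (6 + 6) + 2 * 16
print(bits / weights)  # 4.5 bits per weight
```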
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: Ichsan2895/Merak-7B-v4-GGUF and below it, a specific filename to download, such as: Merak-7B-v4-model-q5_k_m.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download Ichsan2895/Merak-7B-v4-GGUF Merak-7B-v4-model-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download Ichsan2895/Merak-7B-v4-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download Ichsan2895/Merak-7B-v4-GGUF Merak-7B-v4-model-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m Merak-7B-v4-model-q5_k_m.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("Ichsan2895/Merak-7B-v4-GGUF", model_file="Merak-7B-v4-model-q5_k_m.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
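A minimal `llama-cpp-python` counterpart (a sketch; `n_gpu_layers` and `n_ctx` are illustrative, and the prompt follows the ChatML format shown in the llama.cpp example above):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Merak-7B-v4-model-q5_k_m.gguf",  # downloaded as shown above
    n_gpu_layers=32,  # set to 0 for CPU-only inference
    n_ctx=2048,
)
prompt = "<|im_start|>user\nSiapa presiden pertama Indonesia?<|im_end|>\n<|im_start|>assistant\n"
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```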
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## CITATION
```
@software{lian2023mistralorca1,
  title = {MistralOrca: Mistral-7B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset},
  author = {Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca}},
}
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
@article{dettmers2023qlora,
title = {QLoRA: Efficient Finetuning of Quantized LLMs},
author = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal = {arXiv preprint arXiv:2305.14314},
year = {2023}
}
Special thanks to TheBloke for his README.md, which we adapted for this model.
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## HOW TO CITE THIS PROJECT
If you use the Merak-7B model in your research or project, please cite it as:
```
@article{Merak,
  title={Merak-7B: The LLM for Bahasa Indonesia},
  author={Muhammad Ichsan},
  publisher={Hugging Face},
  journal={Hugging Face Repository},
  year={2023}
}
``` |
masonanalytics/Intel-neural-chat-7b-v3-FullChat-512 | masonanalytics | 2023-11-19T16:12:58Z | 3 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Intel/neural-chat-7b-v3",
"base_model:adapter:Intel/neural-chat-7b-v3",
"region:us"
]
| null | 2023-11-19T16:12:18Z | ---
library_name: peft
base_model: Intel/neural-chat-7b-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
|
Ichsan2895/Merak-7B-v1 | Ichsan2895 | 2023-11-19T16:09:58Z | 55 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"id",
"en",
"dataset:wikipedia",
"arxiv:2307.09288",
"arxiv:2305.14314",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-23T08:37:17Z | ---
datasets:
- wikipedia
language:
- id
- en
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://huggingface.co/Ichsan2895/Merak-7B-v1/resolve/main/FINAL_LOGO/6.png" alt="MERAK" style="width: 50%; min-width: 100px; display: block; margin: auto;">
</div>
# Happy to announce the release of our first model, Merak-7B!
Merak-7B is a Large Language Model for the Indonesian language.
This model is based on Meta's Llama-2-7B-Chat-HF and fine-tuned on a set of Indonesian Wikipedia articles that I cleaned beforehand.
Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B is able to run with 16 GB of VRAM.
Merak-7B and all of its derivatives are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0). Merak-7B empowers AI enthusiasts and researchers alike.
Big thanks to all my friends and communities that helped build our first model. Feel free to ask me about the model, and please share the news on your social media.
## HOW TO USE
### Installation
Please make sure you have installed the CUDA driver, Python 3.10, and PyTorch 2 on your system. Then install these libraries in a terminal:
```
pip install bitsandbytes==0.39.1
pip install transformers==4.31.0
pip install peft==0.4.0
pip install accelerate==0.20.3
pip install einops==0.6.1 scipy sentencepiece datasets
```
### Using BitsAndBytes, it runs on a GPU with >= 10 GB VRAM
[](https://colab.research.google.com/drive/1USKJ7HQaxZlHrdi_qFv3B2_GUrvaWgg1?usp=sharing)
```
import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer
from peft import PeftModel, PeftConfig
model_id = "Ichsan2895/Merak-7B-v1"
config = AutoConfig.from_pretrained(model_id)
BNB_CONFIG = BitsAndBytesConfig(load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(model_id,
quantization_config=BNB_CONFIG,
device_map="auto",
trust_remote_code=True)
tokenizer = LlamaTokenizer.from_pretrained(model_id)
def generate_response(question: str) -> str:
prompt = f"<|prompt|>{question}<|answer|>".strip()
encoding = tokenizer(prompt, return_tensors='pt').to("cuda")
with torch.inference_mode():
outputs = model.generate(input_ids=encoding.input_ids,
attention_mask=encoding.attention_mask,
eos_token_id=tokenizer.pad_token_id,
do_sample=False,
num_beams=2,
temperature=0.3,
repetition_penalty=1.2,
max_length=200)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
assistant_start = "<|answer|>"
response_start = response.find(assistant_start)
return response[response_start + len(assistant_start) :].strip()
prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
### From my experience, for better answers, don't use BitsAndBytes 4-bit quantization; note that this requires more VRAM
[](https://colab.research.google.com/drive/1m6pIbJIKtu7T4lRlCiw7HTPSw16hSrPJ?usp=sharing)
```
import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer
from peft import PeftModel, PeftConfig
model_id = "Ichsan2895/Merak-7B-v1"
config = AutoConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map="auto",
trust_remote_code=True)
tokenizer = LlamaTokenizer.from_pretrained(model_id)
def generate_response(question: str) -> str:
prompt = f"<|prompt|>{question}<|answer|>".strip()
encoding = tokenizer(prompt, return_tensors='pt').to("cuda")
with torch.inference_mode():
outputs = model.generate(input_ids=encoding.input_ids,
attention_mask=encoding.attention_mask,
eos_token_id=tokenizer.pad_token_id,
do_sample=False,
num_beams=2,
temperature=0.3,
repetition_penalty=1.2,
max_length=200)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
assistant_start = "<|answer|>"
response_start = response.find(assistant_start)
return response[response_start + len(assistant_start) :].strip()
prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
## CITATION
```
@Paper{arXiv,
author = {Touvron, et al},
title = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
journal = {arXiv preprint arXiv:2307.09288},
year = {2023}
}
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
@article{dettmers2023qlora,
title = {QLoRA: Efficient Finetuning of Quantized LLMs},
author = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal = {arXiv preprint arXiv:2305.14314},
year = {2023}
}
```
## HOW TO CITE THIS PROJECT
If you use the Merak-7B model in your research or project, please cite it as:
```
@article{Merak,
title={Merak-7B: The LLM for Bahasa Indonesia},
author={Muhammad Ichsan},
  publisher={Hugging Face},
journal={Hugging Face Repository},
year={2023}
}
``` |
MayIBorn/mrpc-debertav3_deberta_initialize_dW_with_svd_from_back | MayIBorn | 2023-11-19T16:09:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/deberta-v3-base",
"base_model:adapter:microsoft/deberta-v3-base",
"region:us"
]
| null | 2023-11-19T16:09:02Z | ---
library_name: peft
base_model: microsoft/deberta-v3-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
StevenPerrin/q-Taxi-v3 | StevenPerrin | 2023-11-19T16:08:27Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-19T16:08:26Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="StevenPerrin/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
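`load_from_hub` is not part of a published package; a minimal sketch of such a helper, along the lines of the one defined in the Deep RL Course notebooks, might look like this:

```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled model (here, a dict holding the Q-table and env_id) from the Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```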
|
abrahamtek/CartPole-v1 | abrahamtek | 2023-11-19T16:08:17Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-19T16:08:14Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 65.20 +/- 11.57
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
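As a refresher on the objective a Reinforce agent optimizes, here is an illustrative sketch of the policy-gradient loss (generic course material, not this repository's training code; all names are assumptions):

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Policy-gradient loss: -sum_t log pi(a_t|s_t) * G_t, with discounted returns G_t."""
    returns, g = [], 0.0
    for r in reversed(rewards):  # accumulate discounted returns from the end of the episode
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize as a simple baseline
    return -(torch.stack(log_probs) * returns).sum()
```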
|
LarryAIDraw/hasumi_bluearchive | LarryAIDraw | 2023-11-19T15:34:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-11-19T15:34:18Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/121984?modelVersionId=155678 |
DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R | DAMO-NLP-SG | 2023-11-19T15:34:38Z | 67 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"Zero-Shot Classification",
"zero-shot-classification",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2305.11442",
"license:mit",
"autotrain_compatible",
"region:us"
]
| zero-shot-classification | 2023-08-14T03:29:20Z | ---
inference: false
license: mit
tags:
- Zero-Shot Classification
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
pipeline_tag: zero-shot-classification
metrics:
- accuracy
---
# Zero-shot text classification (multilingual version) trained with self-supervised tuning
Zero-shot text classification model trained with self-supervised tuning (SSTuning).
It was introduced in the paper [Zero-Shot Text Classification via Self-Supervised Tuning](https://arxiv.org/abs/2305.11442) by
Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing
and first released in [this repository](https://github.com/DAMO-NLP-SG/SSTuning).
The model backbone is xlm-roberta-base.
## Model description
The model is tuned with unlabeled data using a first sentence prediction (FSP) learning objective.
The FSP task is designed by considering both the nature of the unlabeled corpus and the input/output format of classification tasks.
The training and validation sets are constructed from the unlabeled corpus using FSP.
During tuning, BERT-like pre-trained masked language
models such as RoBERTa and ALBERT are employed as the backbone, and an output layer for classification is added.
The learning objective for FSP is to predict the index of the correct label.
A cross-entropy loss is used for tuning the model.
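As a rough illustration only (not the authors' released training code; the construction details below are assumptions based on the description above), an FSP training example can be built by pairing a set of candidate first sentences with the rest of a passage, with the index of the true first sentence as the cross-entropy target:

```python
import random
import string

def build_fsp_example(true_first_sentence, rest_of_passage, distractors, sep_token="</s>"):
    """Build one first-sentence-prediction (FSP) example: options + passage -> label index."""
    options = distractors + [true_first_sentence]
    random.shuffle(options)
    letters = string.ascii_uppercase
    option_str = " ".join(f"({letters[i]}) {opt}" for i, opt in enumerate(options))
    text = f"{option_str} {sep_token} {rest_of_passage}"  # mirrors the inference format shown below
    label = options.index(true_first_sentence)            # target index for the cross-entropy loss
    return text, label
```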
## Model variations
There are four versions of models released. The details are:
| Model | Backbone | #params | lang | acc | Speed | #Train |
|------------|-----------|----------|-------|-------|----|-------------|
| [zero-shot-classify-SSTuning-base](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-base) | [roberta-base](https://huggingface.co/roberta-base) | 125M | En | Low | High | 20.48M |
| [zero-shot-classify-SSTuning-large](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-large) | [roberta-large](https://huggingface.co/roberta-large) | 355M | En | Medium | Medium | 5.12M |
| [zero-shot-classify-SSTuning-ALBERT](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-ALBERT) | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M | En | High | Low| 5.12M |
| [zero-shot-classify-SSTuning-XLM-R](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R) | [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) | 278M | Multi | - | - | 20.48M |
Please note that zero-shot-classify-SSTuning-XLM-R is trained with 20.48M English samples only. However, it can also be used in other languages as long as XLM-RoBERTa supports them.
Please check [this repository](https://github.com/DAMO-NLP-SG/SSTuning) for the performance of each model.
## Intended uses & limitations
The model can be used for zero-shot text classification such as sentiment analysis and topic classification. No further fine-tuning is needed.
The number of labels should be between 2 and 20.
### How to use
You can try the model with the Colab [Notebook](https://colab.research.google.com/drive/17bqc8cXFF-wDmZ0o8j7sbrQB9Cq7Gowr?usp=sharing).
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch, string, random
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R")
model = AutoModelForSequenceClassification.from_pretrained("DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R")
text = "I love this place! The food is always so fresh and delicious."
list_label = ["negative", "positive"]
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
list_ABC = [x for x in string.ascii_uppercase]
def check_text(model, text, list_label, shuffle=False):
list_label = [x+'.' if x[-1] != '.' else x for x in list_label]
list_label_new = list_label + [tokenizer.pad_token]* (20 - len(list_label))
if shuffle:
random.shuffle(list_label_new)
s_option = ' '.join(['('+list_ABC[i]+') '+list_label_new[i] for i in range(len(list_label_new))])
text = f'{s_option} {tokenizer.sep_token} {text}'
model.to(device).eval()
encoding = tokenizer([text],truncation=True, max_length=512,return_tensors='pt')
item = {key: val.to(device) for key, val in encoding.items()}
logits = model(**item).logits
logits = logits if shuffle else logits[:,0:len(list_label)]
probs = torch.nn.functional.softmax(logits, dim = -1).tolist()
predictions = torch.argmax(logits, dim=-1).item()
probabilities = [round(x,5) for x in probs[0]]
print(f'prediction: {predictions} => ({list_ABC[predictions]}) {list_label_new[predictions]}')
print(f'probability: {round(probabilities[predictions]*100,2)}%')
check_text(model, text, list_label)
# prediction: 1 => (B) positive.
# probability: 99.92%
```
### BibTeX entry and citation info
```bibtex
@inproceedings{acl23/SSTuning,
author = {Chaoqun Liu and
Wenxuan Zhang and
Guizhen Chen and
Xiaobao Wu and
Anh Tuan Luu and
Chip Hong Chang and
Lidong Bing},
title = {Zero-Shot Text Classification via Self-Supervised Tuning},
booktitle = {Findings of the Association for Computational Linguistics: ACL 2023},
year = {2023},
url = {https://arxiv.org/abs/2305.11442},
}
```
|
obnimorda/llama-2-13b-gptq-ru-sum-odd-lora | obnimorda | 2023-11-19T15:29:52Z | 4 | 1 | peft | [
"peft",
"ru",
"region:us"
]
| null | 2023-11-06T18:08:02Z | ---
language:
- ru
library_name: peft
---
# Llama 2 13B - GPTQ fine-tuned for an arithmetical reasoning task
Based on [Llama 2 13B - GPTQ](https://huggingface.co/TheBloke/Llama-2-13B-GPTQ/tree/gptq-4bit-32g-actorder_True).
This version of the model is adapter-only.
Llama 2 13B has been fine-tuned for a specific task: to check if the sum of odd numbers from a certain group results in an even total. This fine-tuning aims to illustrate the [example](https://obnimorda.ru/guides/prompt/introduction/examples/#задачи-на-логику) in the Russian guide.
As a demonstration model, it often makes errors in arithmetic and logic, but generally produces results in the correct format.
## How to use this model from Python code
Before running this code, ensure that all necessary software is installed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from peft.peft_model import PeftModel
MODEL_ID = 'TheBloke/Llama-2-13B-GPTQ' # ID of the pretrained model in the huggingface.co repository
MODEL_REVISION = 'gptq-4bit-32g-actorder_True' # Model revision: a repository branch. Specified because it is not main
LORA_ID = 'obnimorda/llama-2-13b-gptq-ru-sum-odd-lora' # ID of the LoRA adapter in the huggingface.co repository

# Load the pretrained model
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    revision=MODEL_REVISION,
    device_map="auto", # Automatically place the model on available, compatible devices
)

# Load the LoRA adapter and apply it to the model loaded in the previous step
model = PeftModel.from_pretrained(
    model,
    LORA_ID
)

# Input data: a Russian prompt asking whether the odd numbers of the set sum to an odd number,
# with step-by-step verification instructions (find the odd numbers, add them up, check parity)
#prompt = "Если сложить нечетные числа этого множества, получится нечетное число: 15, 32, 5, 13, 82, 7, 1.\nОтвет:"
prompt = "Если сложить нечетные числа этого множества, получится нечетное число: 15, 32, 5, 13, 82, 7, 1.\nДля проверки выполните действия по порядку. Сначала найдите все нечетные числа, затем сложите их и определите, четным или нечетным является результат.\n\n"
print('prompt:', prompt)

# Tokenizer for converting the input and the generated output: created from the tokenizer files shipped with the model
tokenizer = AutoTokenizer.from_pretrained(
    MODEL_ID,
    revision=MODEL_REVISION,
    use_fast=False,
    legacy=True,
)

# Tokenize the input
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()

# Sequence generation config: created from the configuration file shipped with the model
config = GenerationConfig.from_pretrained(
    MODEL_ID,
    revision=MODEL_REVISION,
)

# Set custom generation parameters
config.do_sample = True
config.max_new_tokens = 100
config.repetition_penalty = 1.15
config.temperature = 0.7
config.top_k = 20
config.top_p = 1

# Generate the sequence
output_ids = model.generate(
    inputs=input_ids,
    generation_config=config
)

# Decode the generation result
output = tokenizer.decode(output_ids[0])
print('output:', output)
``` |
Felladrin/mlc-chat-Mistral-7B-Instruct-v0.1-q4f32_1 | Felladrin | 2023-11-19T15:23:39Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-11-02T11:00:06Z | ---
license: apache-2.0
---
# Mistral 7B Instruct v0.1 for Web-LLM q4f32_1
This is a compiled version of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) for [MLC Web-LLM](https://webllm.mlc.ai/), using `q4f32_1` quantization.
## Usage
```javascript
import * as webLLM from "@mlc-ai/web-llm";
const modelId = "Mistral-7B-Instruct-v0.1-q4f32_1";
const appConfig = {
model_list: [
{
model_url:
"https://huggingface.co/Felladrin/mlc-chat-Mistral-7B-Instruct-v0.1-q4f32_1/resolve/main/params/",
local_id: modelId,
},
],
model_lib_map: {
[modelId]:
"https://huggingface.co/Felladrin/mlc-chat-Mistral-7B-Instruct-v0.1-q4f32_1/resolve/main/Mistral-7B-Instruct-v0.1-q4f32_1-webgpu.wasm",
},
};
const chatConfig = {
temperature: 0,
repetition_penalty: 1.2,
top_p: 1
};
async function main() {
const chat = new webLLM.ChatModule();
await chat.reload(modelId, chatConfig, appConfig);
let lastResponse = "";
const generateProgressCallback = (_, message = "") => {
if (message.length === 0) return chat.interruptGenerate();
lastResponse = message;
console.log(`Partial response: ${lastResponse}`);
};
  const firstPrompt = "Could you answer some questions?";
  await chat.generate(firstPrompt, generateProgressCallback);
console.log(`Complete response: ${lastResponse}`);
const secondPrompt = "What's Mistral?";
await chat.generate(secondPrompt, generateProgressCallback);
console.log(`Complete response: ${lastResponse}`);
console.info(await chat.runtimeStatsText());
}
main();
``` |
Hadjer/mobilebert-uncased-squad-v1-finetuned-squad | Hadjer | 2023-11-19T15:15:43Z | 58 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mobilebert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:csarron/mobilebert-uncased-squad-v1",
"base_model:finetune:csarron/mobilebert-uncased-squad-v1",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-11-14T15:40:07Z | ---
license: mit
base_model: csarron/mobilebert-uncased-squad-v1
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: mobilebert-uncased-squad-v1-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert-uncased-squad-v1-finetuned-squad
This model is a fine-tuned version of [csarron/mobilebert-uncased-squad-v1](https://huggingface.co/csarron/mobilebert-uncased-squad-v1) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
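For reference, these hyperparameters roughly correspond to the following `TrainingArguments` (a sketch only; the `output_dir` and any omitted defaults are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mobilebert-uncased-squad-v1-finetuned-squad",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```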
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5958 | 1.0 | 5533 | 0.9892 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mogmix/beavers | mogmix | 2023-11-19T15:02:12Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T15:02:02Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: beavers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# beavers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### beaver
 |
GuoZiming/CodeLlama-7b-spider-finetuning | GuoZiming | 2023-11-19T14:58:58Z | 2 | 0 | peft | [
"peft",
"en",
"dataset:philikai/SQL_Spider_DDL",
"license:mit",
"region:us"
]
| null | 2023-11-19T13:40:40Z | ---
library_name: peft
license: mit
datasets:
- philikai/SQL_Spider_DDL
language:
- en
---
## Training procedure
### Framework versions
- PEFT 0.4.0 |
hkivancoral/hushem_5x_deit_tiny_sgd_0001_fold2 | hkivancoral | 2023-11-19T14:58:28Z | 31 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T14:53:16Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_sgd_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.2
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_sgd_0001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5542
- Accuracy: 0.2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5719 | 1.0 | 27 | 1.7092 | 0.2222 |
| 1.5311 | 2.0 | 54 | 1.6949 | 0.2222 |
| 1.5151 | 3.0 | 81 | 1.6819 | 0.2222 |
| 1.5077 | 4.0 | 108 | 1.6712 | 0.2222 |
| 1.4707 | 5.0 | 135 | 1.6610 | 0.2222 |
| 1.4799 | 6.0 | 162 | 1.6507 | 0.2222 |
| 1.4704 | 7.0 | 189 | 1.6424 | 0.2 |
| 1.4902 | 8.0 | 216 | 1.6346 | 0.1778 |
| 1.4446 | 9.0 | 243 | 1.6280 | 0.1778 |
| 1.4231 | 10.0 | 270 | 1.6212 | 0.1778 |
| 1.4616 | 11.0 | 297 | 1.6153 | 0.1778 |
| 1.4153 | 12.0 | 324 | 1.6101 | 0.2 |
| 1.4152 | 13.0 | 351 | 1.6055 | 0.2 |
| 1.4531 | 14.0 | 378 | 1.6010 | 0.2 |
| 1.3945 | 15.0 | 405 | 1.5968 | 0.2 |
| 1.3852 | 16.0 | 432 | 1.5928 | 0.2 |
| 1.4109 | 17.0 | 459 | 1.5893 | 0.2 |
| 1.3754 | 18.0 | 486 | 1.5859 | 0.2 |
| 1.385 | 19.0 | 513 | 1.5829 | 0.2222 |
| 1.3607 | 20.0 | 540 | 1.5802 | 0.2222 |
| 1.3947 | 21.0 | 567 | 1.5776 | 0.2222 |
| 1.3764 | 22.0 | 594 | 1.5751 | 0.2222 |
| 1.382 | 23.0 | 621 | 1.5731 | 0.2222 |
| 1.3634 | 24.0 | 648 | 1.5711 | 0.2222 |
| 1.3778 | 25.0 | 675 | 1.5692 | 0.2222 |
| 1.3529 | 26.0 | 702 | 1.5678 | 0.2222 |
| 1.3485 | 27.0 | 729 | 1.5662 | 0.2222 |
| 1.3484 | 28.0 | 756 | 1.5647 | 0.2222 |
| 1.3554 | 29.0 | 783 | 1.5635 | 0.2222 |
| 1.3405 | 30.0 | 810 | 1.5624 | 0.2222 |
| 1.3634 | 31.0 | 837 | 1.5613 | 0.2222 |
| 1.3616 | 32.0 | 864 | 1.5602 | 0.2222 |
| 1.3289 | 33.0 | 891 | 1.5595 | 0.2222 |
| 1.3193 | 34.0 | 918 | 1.5588 | 0.2 |
| 1.3621 | 35.0 | 945 | 1.5580 | 0.2 |
| 1.3672 | 36.0 | 972 | 1.5575 | 0.2 |
| 1.3338 | 37.0 | 999 | 1.5569 | 0.2 |
| 1.3491 | 38.0 | 1026 | 1.5563 | 0.2 |
| 1.3543 | 39.0 | 1053 | 1.5559 | 0.2 |
| 1.3395 | 40.0 | 1080 | 1.5555 | 0.2 |
| 1.3385 | 41.0 | 1107 | 1.5553 | 0.2 |
| 1.3225 | 42.0 | 1134 | 1.5550 | 0.2 |
| 1.3557 | 43.0 | 1161 | 1.5547 | 0.2 |
| 1.3413 | 44.0 | 1188 | 1.5546 | 0.2 |
| 1.3386 | 45.0 | 1215 | 1.5544 | 0.2 |
| 1.3204 | 46.0 | 1242 | 1.5543 | 0.2 |
| 1.335 | 47.0 | 1269 | 1.5543 | 0.2 |
| 1.3373 | 48.0 | 1296 | 1.5542 | 0.2 |
| 1.3715 | 49.0 | 1323 | 1.5542 | 0.2 |
| 1.2935 | 50.0 | 1350 | 1.5542 | 0.2 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hamadandrabi/Microsoft_Phi_gsm8k | hamadandrabi | 2023-11-19T14:57:47Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"custom_code",
"dataset:gsm8k",
"arxiv:2309.05463",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-11-01T15:55:01Z | ---
datasets:
- gsm8k
---
The integration of the GSM8K dataset into the phi-1.5 language model enhances its ability to tackle math word problems. GSM8K, a meticulously curated collection of 8.5K grade school math problems, enriches phi-1.5 with a diverse range of mathematical challenges that require logical reasoning and multi-step problem-solving skills.
These include:
**Basic Arithmetic**: Solving problems involving addition, subtraction, multiplication, and division, which are foundational to understanding more complex mathematical concepts.
**Fractions and Percentages**: Handling questions that require the manipulation and understanding of fractions and percentages, crucial for everyday calculations.
**Word Problems**: Demonstrating an improved ability to parse and solve word problems, a key skill that bridges the gap between mathematical theory and real-world application.
**Multi-Step Reasoning**: Tackling problems that require a sequence of logical steps, enhancing its capability to process and solve more complex mathematical scenarios.
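For instance, a GSM8K-style word problem (an illustrative example in the style of the dataset) looks like:

```markdown
Question: Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?
Answer: Natalia sold 48 / 2 = 24 clips in May. Altogether she sold 48 + 24 = 72 clips. The answer is 72.
```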
**Original Model Card**:
---
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx
language:
- en
pipeline_tag: text-generation
---
## Model Summary
The language model phi-1.5 is a Transformer with **1.3 billion** parameters. It was trained using the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.
We **did not** fine-tune phi-1.5 either for **instruction following or through reinforcement learning from human feedback**. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
For a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.
phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.
## Intended Uses
Given the nature of the training data, phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.
#### QA format:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
Answer: Mathematics is like a lighthouse, guiding us through the vast ocean of numbers and calculations. Just as a lighthouse illuminates the darkness, mathematics provides us with a clear path to navigate through complex problems. It helps us make sense of the world around us, just like a lighthouse helps ships find their way home.
```
where the model generates the text after "Answer:".
#### Chat format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Have you tried using a timer? It can help you stay on track and avoid distractions.
Alice: That's a good idea. I'll give it a try.
Charlie: Another thing that can help is to break up your study sessions into smaller chunks. It's easier to concentrate on one thing at a time.
Alice: That makes sense. I'll try that too.
Bob: And don't forget to take breaks! It's important to give your brain a rest so you can come back to your studies with a fresh perspective.
Alice: Thanks for the advice, guys. I feel more motivated now.
Charlie: No problem, Alice. We're all in this together.
Bob: Yeah, and remember that it's okay to ask for help if you need it. We're here to support each other.
```
where the model generates the text after the first "Bob:".
#### Code format:
```python
def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
**Notes**
* phi-1.5 is intended for research purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks is out of the scope of this research project. As a result, phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
## Limitations of phi-1.5
* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite that the model is trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- We hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Dataset size: 30B tokens
* Training tokens: 150B tokens
* Precision: fp16
* GPUs: 32xA100-40G
* Training time: 8 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [flash-attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [Research License](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx).
### Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
inputs = tokenizer('''```python
def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
If you need to use the model in a lower precision (e.g., FP16), please wrap the model's forward pass with `torch.autocast()`, as follows:
```python
with torch.autocast(model.device.type, dtype=torch.float16, enabled=True):
outputs = model.generate(**inputs, max_length=200)
```
**Remark.** In the generation function, our model currently does not support beam search (`num_beams` > 1).
Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings (instead of the model's).
### Citation
You can find the paper at https://arxiv.org/abs/2309.05463
```bib
@article{textbooks2,
title={Textbooks Are All You Need II: \textbf{phi-1.5} technical report},
author={Li, Yuanzhi and Bubeck, S{\'e}bastien and Eldan, Ronen and Del Giorno, Allie and Gunasekar, Suriya and Lee, Yin Tat},
journal={arXiv preprint arXiv:2309.05463},
year={2023}
}
```
|
maxspin/medichat | maxspin | 2023-11-19T14:54:03Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-11-19T14:13:04Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
- Here is the code to load the LoRA adapter along with the Llama 2 chat model:

```python
!pip install -q accelerate==0.21.0 peft==0.4.0 bitsandbytes==0.40.2 transformers==4.31.0 trl==0.4.7

from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: point this at the directory containing adapter_config.json
config = PeftConfig.from_pretrained("path_to_directory_containing_adapter_config")

# Load the base Llama 2 chat model in 8-bit
model = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-chat-hf", return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

# Apply the medichat LoRA adapter to the base model
model = PeftModel.from_pretrained(model, "/content/llama-2-7b-medichat")

batch = tokenizer("My friend has been feeling a little dizzy these days. What could he possibly be suffering from?", return_tensors='pt')
batch = batch.to('cuda')

output_tokens = model.generate(**batch, max_new_tokens=300)  # you can adjust max_new_tokens
output_response = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
```
|
hkivancoral/hushem_5x_deit_tiny_sgd_0001_fold1 | hkivancoral | 2023-11-19T14:53:02Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T14:47:52Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_sgd_0001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.26666666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_sgd_0001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5185
- Accuracy: 0.2667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5559 | 1.0 | 27 | 1.6659 | 0.2667 |
| 1.5307 | 2.0 | 54 | 1.6510 | 0.2667 |
| 1.5463 | 3.0 | 81 | 1.6380 | 0.2889 |
| 1.5241 | 4.0 | 108 | 1.6272 | 0.2889 |
| 1.4794 | 5.0 | 135 | 1.6169 | 0.2889 |
| 1.5071 | 6.0 | 162 | 1.6070 | 0.2889 |
| 1.4768 | 7.0 | 189 | 1.5986 | 0.2889 |
| 1.4869 | 8.0 | 216 | 1.5910 | 0.2889 |
| 1.4651 | 9.0 | 243 | 1.5844 | 0.3111 |
| 1.4396 | 10.0 | 270 | 1.5781 | 0.3111 |
| 1.4572 | 11.0 | 297 | 1.5728 | 0.3111 |
| 1.4029 | 12.0 | 324 | 1.5680 | 0.3111 |
| 1.4355 | 13.0 | 351 | 1.5638 | 0.3111 |
| 1.4582 | 14.0 | 378 | 1.5597 | 0.2889 |
| 1.4073 | 15.0 | 405 | 1.5561 | 0.2889 |
| 1.4381 | 16.0 | 432 | 1.5526 | 0.2889 |
| 1.4333 | 17.0 | 459 | 1.5495 | 0.2889 |
| 1.3978 | 18.0 | 486 | 1.5468 | 0.2889 |
| 1.3884 | 19.0 | 513 | 1.5441 | 0.2889 |
| 1.3796 | 20.0 | 540 | 1.5418 | 0.2889 |
| 1.4025 | 21.0 | 567 | 1.5397 | 0.2889 |
| 1.3822 | 22.0 | 594 | 1.5376 | 0.2889 |
| 1.3868 | 23.0 | 621 | 1.5359 | 0.2889 |
| 1.3907 | 24.0 | 648 | 1.5343 | 0.2889 |
| 1.38 | 25.0 | 675 | 1.5327 | 0.2667 |
| 1.3755 | 26.0 | 702 | 1.5313 | 0.2667 |
| 1.3485 | 27.0 | 729 | 1.5299 | 0.2667 |
| 1.3648 | 28.0 | 756 | 1.5287 | 0.2667 |
| 1.3797 | 29.0 | 783 | 1.5276 | 0.2667 |
| 1.3716 | 30.0 | 810 | 1.5265 | 0.2667 |
| 1.389 | 31.0 | 837 | 1.5256 | 0.2667 |
| 1.3813 | 32.0 | 864 | 1.5247 | 0.2667 |
| 1.3289 | 33.0 | 891 | 1.5240 | 0.2667 |
| 1.3517 | 34.0 | 918 | 1.5232 | 0.2667 |
| 1.3834 | 35.0 | 945 | 1.5225 | 0.2667 |
| 1.3458 | 36.0 | 972 | 1.5218 | 0.2667 |
| 1.3745 | 37.0 | 999 | 1.5212 | 0.2667 |
| 1.3761 | 38.0 | 1026 | 1.5207 | 0.2667 |
| 1.3726 | 39.0 | 1053 | 1.5203 | 0.2667 |
| 1.3125 | 40.0 | 1080 | 1.5199 | 0.2667 |
| 1.3599 | 41.0 | 1107 | 1.5196 | 0.2667 |
| 1.3277 | 42.0 | 1134 | 1.5193 | 0.2667 |
| 1.3748 | 43.0 | 1161 | 1.5191 | 0.2667 |
| 1.3689 | 44.0 | 1188 | 1.5188 | 0.2667 |
| 1.3379 | 45.0 | 1215 | 1.5187 | 0.2667 |
| 1.3358 | 46.0 | 1242 | 1.5186 | 0.2667 |
| 1.3497 | 47.0 | 1269 | 1.5185 | 0.2667 |
| 1.3482 | 48.0 | 1296 | 1.5185 | 0.2667 |
| 1.3616 | 49.0 | 1323 | 1.5185 | 0.2667 |
| 1.3216 | 50.0 | 1350 | 1.5185 | 0.2667 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/hushem_5x_deit_tiny_sgd_00001_fold5 | hkivancoral | 2023-11-19T14:46:12Z | 23 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T14:40:55Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_sgd_00001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.24390243902439024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_sgd_00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7062
- Accuracy: 0.2439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.53 | 1.0 | 28 | 1.7652 | 0.2439 |
| 1.4658 | 2.0 | 56 | 1.7625 | 0.2439 |
| 1.4749 | 3.0 | 84 | 1.7598 | 0.2439 |
| 1.4869 | 4.0 | 112 | 1.7572 | 0.2439 |
| 1.4859 | 5.0 | 140 | 1.7548 | 0.2439 |
| 1.5155 | 6.0 | 168 | 1.7523 | 0.2439 |
| 1.4632 | 7.0 | 196 | 1.7499 | 0.2439 |
| 1.4958 | 8.0 | 224 | 1.7475 | 0.2439 |
| 1.538 | 9.0 | 252 | 1.7452 | 0.2439 |
| 1.5008 | 10.0 | 280 | 1.7432 | 0.2439 |
| 1.4793 | 11.0 | 308 | 1.7411 | 0.2439 |
| 1.483 | 12.0 | 336 | 1.7391 | 0.2439 |
| 1.4966 | 13.0 | 364 | 1.7374 | 0.2439 |
| 1.5231 | 14.0 | 392 | 1.7355 | 0.2439 |
| 1.5038 | 15.0 | 420 | 1.7337 | 0.2439 |
| 1.4896 | 16.0 | 448 | 1.7319 | 0.2439 |
| 1.5043 | 17.0 | 476 | 1.7303 | 0.2439 |
| 1.4967 | 18.0 | 504 | 1.7286 | 0.2439 |
| 1.5162 | 19.0 | 532 | 1.7269 | 0.2439 |
| 1.5126 | 20.0 | 560 | 1.7254 | 0.2439 |
| 1.4809 | 21.0 | 588 | 1.7239 | 0.2439 |
| 1.4877 | 22.0 | 616 | 1.7225 | 0.2439 |
| 1.5048 | 23.0 | 644 | 1.7212 | 0.2439 |
| 1.4932 | 24.0 | 672 | 1.7199 | 0.2439 |
| 1.4898 | 25.0 | 700 | 1.7187 | 0.2439 |
| 1.4408 | 26.0 | 728 | 1.7176 | 0.2439 |
| 1.5027 | 27.0 | 756 | 1.7165 | 0.2439 |
| 1.4716 | 28.0 | 784 | 1.7154 | 0.2439 |
| 1.5167 | 29.0 | 812 | 1.7145 | 0.2439 |
| 1.4795 | 30.0 | 840 | 1.7136 | 0.2439 |
| 1.5126 | 31.0 | 868 | 1.7127 | 0.2439 |
| 1.4908 | 32.0 | 896 | 1.7119 | 0.2439 |
| 1.4785 | 33.0 | 924 | 1.7111 | 0.2439 |
| 1.4672 | 34.0 | 952 | 1.7104 | 0.2439 |
| 1.4938 | 35.0 | 980 | 1.7097 | 0.2439 |
| 1.4756 | 36.0 | 1008 | 1.7092 | 0.2439 |
| 1.4385 | 37.0 | 1036 | 1.7087 | 0.2439 |
| 1.5268 | 38.0 | 1064 | 1.7082 | 0.2439 |
| 1.4939 | 39.0 | 1092 | 1.7078 | 0.2439 |
| 1.4888 | 40.0 | 1120 | 1.7074 | 0.2439 |
| 1.4584 | 41.0 | 1148 | 1.7071 | 0.2439 |
| 1.5033 | 42.0 | 1176 | 1.7068 | 0.2439 |
| 1.5098 | 43.0 | 1204 | 1.7066 | 0.2439 |
| 1.485 | 44.0 | 1232 | 1.7064 | 0.2439 |
| 1.4705 | 45.0 | 1260 | 1.7063 | 0.2439 |
| 1.4946 | 46.0 | 1288 | 1.7062 | 0.2439 |
| 1.4654 | 47.0 | 1316 | 1.7062 | 0.2439 |
| 1.5055 | 48.0 | 1344 | 1.7062 | 0.2439 |
| 1.4868 | 49.0 | 1372 | 1.7062 | 0.2439 |
| 1.489 | 50.0 | 1400 | 1.7062 | 0.2439 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Mediocreatmybest/instructblip-vicuna-7b_8bit | Mediocreatmybest | 2023-11-19T14:45:49Z | 24 | 3 | transformers | [
"transformers",
"safetensors",
"instructblip",
"image-text-to-text",
"image-to-text",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| image-to-text | 2023-07-22T10:10:09Z | ---
library_name: transformers
pipeline_tag: image-to-text
---
8-Bit saved version from: Salesforce/instructblip-vicuna-7b |
hkivancoral/hushem_5x_deit_tiny_sgd_00001_fold3 | hkivancoral | 2023-11-19T14:35:14Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T14:29:58Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_sgd_00001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.27906976744186046
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_sgd_00001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6587
- Accuracy: 0.2791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5526 | 1.0 | 28 | 1.7069 | 0.2791 |
| 1.5173 | 2.0 | 56 | 1.7047 | 0.2791 |
| 1.5161 | 3.0 | 84 | 1.7025 | 0.2791 |
| 1.5055 | 4.0 | 112 | 1.7004 | 0.2791 |
| 1.4587 | 5.0 | 140 | 1.6983 | 0.2791 |
| 1.5199 | 6.0 | 168 | 1.6963 | 0.2791 |
| 1.5621 | 7.0 | 196 | 1.6943 | 0.2791 |
| 1.5165 | 8.0 | 224 | 1.6925 | 0.2791 |
| 1.5226 | 9.0 | 252 | 1.6906 | 0.2791 |
| 1.4955 | 10.0 | 280 | 1.6889 | 0.2791 |
| 1.5136 | 11.0 | 308 | 1.6873 | 0.2791 |
| 1.5328 | 12.0 | 336 | 1.6856 | 0.2791 |
| 1.4996 | 13.0 | 364 | 1.6840 | 0.2791 |
| 1.5073 | 14.0 | 392 | 1.6824 | 0.2791 |
| 1.566 | 15.0 | 420 | 1.6809 | 0.2791 |
| 1.501 | 16.0 | 448 | 1.6795 | 0.2791 |
| 1.4781 | 17.0 | 476 | 1.6780 | 0.2791 |
| 1.5327 | 18.0 | 504 | 1.6766 | 0.2791 |
| 1.4922 | 19.0 | 532 | 1.6753 | 0.2791 |
| 1.5682 | 20.0 | 560 | 1.6741 | 0.2791 |
| 1.4804 | 21.0 | 588 | 1.6729 | 0.2791 |
| 1.4661 | 22.0 | 616 | 1.6719 | 0.2791 |
| 1.5385 | 23.0 | 644 | 1.6708 | 0.2791 |
| 1.4844 | 24.0 | 672 | 1.6698 | 0.2791 |
| 1.583 | 25.0 | 700 | 1.6688 | 0.2791 |
| 1.4741 | 26.0 | 728 | 1.6678 | 0.2791 |
| 1.4816 | 27.0 | 756 | 1.6669 | 0.2791 |
| 1.4922 | 28.0 | 784 | 1.6662 | 0.2791 |
| 1.5132 | 29.0 | 812 | 1.6654 | 0.2791 |
| 1.4828 | 30.0 | 840 | 1.6647 | 0.2791 |
| 1.4775 | 31.0 | 868 | 1.6640 | 0.2791 |
| 1.4969 | 32.0 | 896 | 1.6634 | 0.2791 |
| 1.5111 | 33.0 | 924 | 1.6627 | 0.2791 |
| 1.4897 | 34.0 | 952 | 1.6621 | 0.2791 |
| 1.485 | 35.0 | 980 | 1.6616 | 0.2791 |
| 1.5295 | 36.0 | 1008 | 1.6612 | 0.2791 |
| 1.4993 | 37.0 | 1036 | 1.6607 | 0.2791 |
| 1.4874 | 38.0 | 1064 | 1.6603 | 0.2791 |
| 1.5091 | 39.0 | 1092 | 1.6600 | 0.2791 |
| 1.4861 | 40.0 | 1120 | 1.6597 | 0.2791 |
| 1.5191 | 41.0 | 1148 | 1.6595 | 0.2791 |
| 1.4786 | 42.0 | 1176 | 1.6593 | 0.2791 |
| 1.4918 | 43.0 | 1204 | 1.6591 | 0.2791 |
| 1.5394 | 44.0 | 1232 | 1.6589 | 0.2791 |
| 1.4962 | 45.0 | 1260 | 1.6588 | 0.2791 |
| 1.4846 | 46.0 | 1288 | 1.6588 | 0.2791 |
| 1.4732 | 47.0 | 1316 | 1.6587 | 0.2791 |
| 1.4909 | 48.0 | 1344 | 1.6587 | 0.2791 |
| 1.4794 | 49.0 | 1372 | 1.6587 | 0.2791 |
| 1.4677 | 50.0 | 1400 | 1.6587 | 0.2791 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/hushem_5x_deit_tiny_sgd_00001_fold2 | hkivancoral | 2023-11-19T14:29:45Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T14:24:33Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_sgd_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.2222222222222222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_sgd_00001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6903
- Accuracy: 0.2222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5766 | 1.0 | 27 | 1.7230 | 0.2222 |
| 1.5482 | 2.0 | 54 | 1.7214 | 0.2222 |
| 1.545 | 3.0 | 81 | 1.7199 | 0.2222 |
| 1.5461 | 4.0 | 108 | 1.7185 | 0.2222 |
| 1.5083 | 5.0 | 135 | 1.7171 | 0.2222 |
| 1.533 | 6.0 | 162 | 1.7157 | 0.2222 |
| 1.5321 | 7.0 | 189 | 1.7144 | 0.2222 |
| 1.5594 | 8.0 | 216 | 1.7131 | 0.2222 |
| 1.5064 | 9.0 | 243 | 1.7118 | 0.2222 |
| 1.4954 | 10.0 | 270 | 1.7106 | 0.2222 |
| 1.5489 | 11.0 | 297 | 1.7094 | 0.2222 |
| 1.5003 | 12.0 | 324 | 1.7083 | 0.2222 |
| 1.5049 | 13.0 | 351 | 1.7073 | 0.2222 |
| 1.5566 | 14.0 | 378 | 1.7062 | 0.2222 |
| 1.4974 | 15.0 | 405 | 1.7052 | 0.2222 |
| 1.4964 | 16.0 | 432 | 1.7043 | 0.2222 |
| 1.5261 | 17.0 | 459 | 1.7033 | 0.2222 |
| 1.4775 | 18.0 | 486 | 1.7024 | 0.2222 |
| 1.5065 | 19.0 | 513 | 1.7015 | 0.2222 |
| 1.4755 | 20.0 | 540 | 1.7007 | 0.2222 |
| 1.5258 | 21.0 | 567 | 1.6999 | 0.2222 |
| 1.5082 | 22.0 | 594 | 1.6991 | 0.2222 |
| 1.4959 | 23.0 | 621 | 1.6984 | 0.2222 |
| 1.4864 | 24.0 | 648 | 1.6977 | 0.2222 |
| 1.5154 | 25.0 | 675 | 1.6970 | 0.2222 |
| 1.4838 | 26.0 | 702 | 1.6964 | 0.2222 |
| 1.4873 | 27.0 | 729 | 1.6958 | 0.2222 |
| 1.5062 | 28.0 | 756 | 1.6952 | 0.2222 |
| 1.4886 | 29.0 | 783 | 1.6947 | 0.2222 |
| 1.497 | 30.0 | 810 | 1.6942 | 0.2222 |
| 1.498 | 31.0 | 837 | 1.6937 | 0.2222 |
| 1.4952 | 32.0 | 864 | 1.6933 | 0.2222 |
| 1.492 | 33.0 | 891 | 1.6929 | 0.2222 |
| 1.4832 | 34.0 | 918 | 1.6925 | 0.2222 |
| 1.5095 | 35.0 | 945 | 1.6922 | 0.2222 |
| 1.5164 | 36.0 | 972 | 1.6919 | 0.2222 |
| 1.5027 | 37.0 | 999 | 1.6916 | 0.2222 |
| 1.5132 | 38.0 | 1026 | 1.6913 | 0.2222 |
| 1.5138 | 39.0 | 1053 | 1.6911 | 0.2222 |
| 1.4995 | 40.0 | 1080 | 1.6909 | 0.2222 |
| 1.4924 | 41.0 | 1107 | 1.6908 | 0.2222 |
| 1.4936 | 42.0 | 1134 | 1.6906 | 0.2222 |
| 1.5293 | 43.0 | 1161 | 1.6905 | 0.2222 |
| 1.5068 | 44.0 | 1188 | 1.6904 | 0.2222 |
| 1.5134 | 45.0 | 1215 | 1.6904 | 0.2222 |
| 1.4849 | 46.0 | 1242 | 1.6903 | 0.2222 |
| 1.5093 | 47.0 | 1269 | 1.6903 | 0.2222 |
| 1.4852 | 48.0 | 1296 | 1.6903 | 0.2222 |
| 1.5207 | 49.0 | 1323 | 1.6903 | 0.2222 |
| 1.4563 | 50.0 | 1350 | 1.6903 | 0.2222 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ash100/llm-v2-1 | ash100 | 2023-11-19T14:24:45Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-11-19T14:24:30Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
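Expressed in code, this corresponds to a `BitsAndBytesConfig` along the following lines (a minimal sketch of the quantization setup only; model loading is omitted since the card does not name the base model):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```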
### Framework versions
- PEFT 0.4.0
|
hkivancoral/hushem_5x_deit_tiny_rms_00001_fold5 | hkivancoral | 2023-11-19T14:16:43Z | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T14:11:18Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_rms_00001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8292682926829268
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_rms_00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1081
- Accuracy: 0.8293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.957 | 1.0 | 28 | 0.7236 | 0.7073 |
| 0.3642 | 2.0 | 56 | 0.5185 | 0.8049 |
| 0.1944 | 3.0 | 84 | 0.5546 | 0.8049 |
| 0.0826 | 4.0 | 112 | 0.7838 | 0.7561 |
| 0.027 | 5.0 | 140 | 0.5372 | 0.8049 |
| 0.0125 | 6.0 | 168 | 0.5869 | 0.8293 |
| 0.0034 | 7.0 | 196 | 0.7015 | 0.8293 |
| 0.0012 | 8.0 | 224 | 0.6670 | 0.8049 |
| 0.0008 | 9.0 | 252 | 0.6919 | 0.8293 |
| 0.0006 | 10.0 | 280 | 0.7125 | 0.8293 |
| 0.0004 | 11.0 | 308 | 0.7267 | 0.8293 |
| 0.0004 | 12.0 | 336 | 0.7569 | 0.8293 |
| 0.0003 | 13.0 | 364 | 0.7526 | 0.8293 |
| 0.0003 | 14.0 | 392 | 0.7915 | 0.8293 |
| 0.0002 | 15.0 | 420 | 0.8002 | 0.8293 |
| 0.0002 | 16.0 | 448 | 0.8251 | 0.8293 |
| 0.0002 | 17.0 | 476 | 0.8438 | 0.8293 |
| 0.0001 | 18.0 | 504 | 0.8466 | 0.8293 |
| 0.0001 | 19.0 | 532 | 0.8704 | 0.8293 |
| 0.0001 | 20.0 | 560 | 0.8762 | 0.8293 |
| 0.0001 | 21.0 | 588 | 0.8972 | 0.8293 |
| 0.0001 | 22.0 | 616 | 0.8987 | 0.8293 |
| 0.0001 | 23.0 | 644 | 0.9318 | 0.8293 |
| 0.0001 | 24.0 | 672 | 0.9238 | 0.8293 |
| 0.0001 | 25.0 | 700 | 0.9169 | 0.8293 |
| 0.0 | 26.0 | 728 | 0.9411 | 0.8293 |
| 0.0 | 27.0 | 756 | 0.9447 | 0.8293 |
| 0.0 | 28.0 | 784 | 0.9671 | 0.8293 |
| 0.0 | 29.0 | 812 | 0.9709 | 0.8293 |
| 0.0 | 30.0 | 840 | 0.9844 | 0.8293 |
| 0.0 | 31.0 | 868 | 0.9959 | 0.8293 |
| 0.0 | 32.0 | 896 | 1.0060 | 0.8293 |
| 0.0 | 33.0 | 924 | 1.0055 | 0.8293 |
| 0.0 | 34.0 | 952 | 1.0143 | 0.8293 |
| 0.0 | 35.0 | 980 | 1.0276 | 0.8293 |
| 0.0 | 36.0 | 1008 | 1.0321 | 0.8293 |
| 0.0 | 37.0 | 1036 | 1.0476 | 0.8293 |
| 0.0 | 38.0 | 1064 | 1.0409 | 0.8293 |
| 0.0 | 39.0 | 1092 | 1.0558 | 0.8293 |
| 0.0 | 40.0 | 1120 | 1.0678 | 0.8293 |
| 0.0 | 41.0 | 1148 | 1.0832 | 0.8293 |
| 0.0 | 42.0 | 1176 | 1.0928 | 0.8293 |
| 0.0 | 43.0 | 1204 | 1.0842 | 0.8293 |
| 0.0 | 44.0 | 1232 | 1.0881 | 0.8293 |
| 0.0 | 45.0 | 1260 | 1.0924 | 0.8293 |
| 0.0 | 46.0 | 1288 | 1.1046 | 0.8293 |
| 0.0 | 47.0 | 1316 | 1.1089 | 0.8293 |
| 0.0 | 48.0 | 1344 | 1.1085 | 0.8293 |
| 0.0 | 49.0 | 1372 | 1.1081 | 0.8293 |
| 0.0 | 50.0 | 1400 | 1.1081 | 0.8293 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
bitsoshka/StableLM-ProbPEFT-LOTR | bitsoshka | 2023-11-19T14:10:40Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:stabilityai/stablelm-3b-4e1t",
"base_model:adapter:stabilityai/stablelm-3b-4e1t",
"region:us"
]
| null | 2023-11-09T08:31:03Z | ---
library_name: peft
base_model: stabilityai/stablelm-3b-4e1t
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0
|
ass-a2s/em_german_70b_v01 | ass-a2s | 2023-11-19T14:07:22Z | 14 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"german",
"deutsch",
"llama2",
"meta",
"facebook",
"de",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-19T13:14:02Z | ---
inference: false
language:
- de
library_name: transformers
license: llama2
model_creator: jphme
model_name: EM German
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:'
tags:
- pytorch
- german
- deutsch
- llama2
- meta
- facebook
---

# Table of Contents
1. [Introduction](#introduction)
2. [Links & Demos](#links--demos)
- [Model Links](#model-links)
- [Demos](#demos)
3. [Prompt Format](#prompt-format)
4. [Example Output](#example-output)
5. [Acknowledgements](#acknowledgements)
6. [Contact](#contact)
7. [Disclaimer](#disclaimer)
# Introduction
**EM German** is a Llama2/Mistral/LeoLM-based model family, finetuned on a large dataset of various instructions in the German language. The models are optimized for German text, providing proficiency in understanding, generating, and interacting with German language content.
We offer versions based on 7b, 13b and 70b Llama-2, Mistral and LeoLM (Llama-2/Mistral with continued pretraining on German texts) models.
Please find all information, example outputs, the special RAG prompt format, and eval results for the EM German model family in [our Github Repository](https://github.com/jphme/EM_German) ([German version](https://github.com/jphme/EM_German/blob/main/README_DE.md)). You will also find instructions on how to run the models with a GUI (GPT4All/LM Studio).
# Links & Demos
## Model Links
If you only try one model version, I strongly recommend the **[LeoLM Mistral](https://huggingface.co/jphme/em_german_leo_mistral)** model, which offers by far the best combination of performance and computing requirements!
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| Llama2 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-AWQ) |
| Llama2 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-AWQ) |
| Llama2 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-AWQ) |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-AWQ) |
| [LeoLM 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_7b_leo) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gguf) | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-13b) | soon | soon | [Link](https://huggingface.co/jphme/em_german_13b_leo_gguf) | tbc |
| [LeoLM Mistral](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_leo_mistral) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-AWQ) |
### Notes about the different versions:
See also the [comparison of example outputs](https://github.com/jphme/EM_German/blob/main/example_output_comparison.md) for a comparison of (7b) model capabilities.
If you get unsatisfactory results with one or another EM German version, please try a different (and/or larger) model or version for your use case.
## Demos:
You can use some of the models with **free** Google Colab instances (e.g. the 7b model in 8-bit or the 13b model with GPTQ):
* [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* [Example Colab Notebook for 7b Mistral GGUF with Grammar-based structured output](https://colab.research.google.com/drive/17O-a3D4BnVc_V9Of3dFfed18v1ULu1Sv?usp=sharing)
# Prompt Format
This model follows the Vicuna format without linebreaks (but should work with linebreaks as well). The format is as follows:
```
Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT:
```
You can swap the standard system prompt for a better-suited one (see below for RAG tasks).
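As a minimal illustration, a prompt in this format can be assembled like so (the helper function is ours, not part of the model's tooling):

```python
def build_prompt(instruction: str, system: str = "Du bist ein hilfreicher Assistent.") -> str:
    # Vicuna-style format without linebreaks, as described above
    return f"{system} USER: {instruction} ASSISTANT:"

print(build_prompt("Was ist 1+1?"))
# Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:
```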
# Example Output
Please find a selection of various example outputs for all 7b versions of EM German [here](https://github.com/jphme/EM_German/blob/main/example_output_comparison.md).
**Prompt:**
> Erkläre die Bedeutung von drei wesentlichen Elementen der Bilanzierung und des Accountings nach IFRS für Unternehmen, die auf internationaler Ebene tätig sind.
**Answer from EM German Leo Mistral 7b:**
```
Die International Financial Reporting Standards (IFRS) sind eine Reihe von internationalen Rechnungslegungsstandards, die von der International Accounting Standards Board (IASB) entwickelt wurden. Sie sollen dazu beitragen, dass Finanzberichte vergleichbar, zuverlässig und relevant sind.
1. Fair Value: Dieses Konzept ist ein zentraler Bestandteil der IFRS. Es besagt, dass Vermögenswerte und Schulden zum fairen Wert bilanziert werden sollten, was dem Preis entspricht, zu dem sie zwischen Marktteilnehmern gehandelt werden könnten. Dadurch wird sichergestellt, dass das Unternehmen eine genaue Darstellung seiner finanziellen Lage liefert.
2. Konsolidierung: Die IFRS verlangen von Unternehmen, ihre Finanzergebnisse auf konsolidierter Basis darzustellen. Das bedeutet, dass alle Tochtergesellschaften und verbundenen Unternehmen in die Bilanz des Mutterunternehmens aufgenommen werden. Dadurch wird ein klareres Bild von der finanziellen Situation des Konzerns geschaffen.
3. Vollständigkeitsprinzip: Die IFRS verlangen von Unternehmen, alle relevanten Informationen offenzulegen. Dies umfasst sowohl positive als auch negative Faktoren, wie zum Beispiel Gewinne und Verluste, Einnahmen und Ausgaben sowie Risiken und Unsicherheiten. Durch dieses Prinzip wird sichergestellt, dass Investoren, Kreditgeber und andere Stakeholder eine umfassende Einschätzung der finanziellen Lage des Unternehmens vornehmen können.
```
**Prompt:**
> Ein Mensch atmet durchschnittlich 16 Mal pro Minute. Wie oft atmet er an einem Tag?
**Answer from EM German Leo Mistral 7b:**
```
Zuerst berechnen wir, wie oft eine Person in einer Stunde atmet. Wir wissen, dass eine Stunde 60 Minuten hat, also:
Atemzüge pro Stunde = Atemfrequenz * Minuten pro Stunde
= 16 Atemzüge pro Minute * 60 Minuten
= 960 Atemzüge
Um herauszufinden, wie viele Atemzüge eine Person an einem Tag macht, müssen wir diese stündliche Rate mit der Anzahl der Stunden an einem Tag multiplizieren, also 24:
Atemzüge pro Tag = Atemzüge pro Stunde * Stunden pro Tag
= 960 Atemzüge * 24 Stunden
= 23.040 Atemzüge
Also macht ein durchschnittlicher Mensch etwa 23.040 Atemzüge an einem Tag.
```
(For more examples, please visit [our Github Repository](https://github.com/jphme/EM_German).)
# Acknowledgements:
Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl, which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code, from which I borrowed many ideas and code snippets.
Additionally many thanks to [Björn Plüster](https://huggingface.co/bjoernp) and the LeoLM team for the outstanding pretraining work on LeoLM and last but not least many many thanks to [TheBloke](https://huggingface.co/TheBloke) for the preparation of quantized versions in all formats under the sun.
The 70b model was trained with support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/).
# Contact
For detailed feedback & feature requests, please open an issue or get in contact with me via [my website](https://www.jph.me).
*PS: We are also always interested in support for our startup [ellamind](https://ellamind.com), which will offer customized models for business applications in the future (we are currently still in stealth mode). If you use our models for business applications and have advanced needs for specialized capabilities, please get in touch.*
# Disclaimer:
I am not responsible for the actions of third parties who use this model or the outputs of the model. This model should only be used for research purposes. The original base model license applies and is distributed with the model files. |
Emilie-Amandine/distilbert-base-uncased-finetuned-squad | Emilie-Amandine | 2023-11-19T13:59:18Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-11-19T10:54:22Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2297 | 1.0 | 5533 | 1.1525 |
| 0.9631 | 2.0 | 11066 | 1.1263 |
| 0.7327 | 3.0 | 16599 | 1.1576 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.13.2
|
NourElsennary/elsennary | NourElsennary | 2023-11-19T13:57:44Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-19T13:53:47Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### elsennary Dreambooth model trained by NourElsennary with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Ahmed1Azab/big | Ahmed1Azab | 2023-11-19T13:48:33Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-19T13:43:38Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### BIG Dreambooth model trained by Ahmed1Azab with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
hkivancoral/hushem_5x_deit_tiny_rms_0001_fold5 | hkivancoral | 2023-11-19T13:46:54Z | 20 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T13:39:21Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_rms_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8780487804878049
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_rms_0001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8418
- Accuracy: 0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4959 | 1.0 | 28 | 1.3943 | 0.2439 |
| 1.4016 | 2.0 | 56 | 1.1344 | 0.4146 |
| 0.8658 | 3.0 | 84 | 0.7965 | 0.7073 |
| 0.4771 | 4.0 | 112 | 0.8323 | 0.7073 |
| 0.2251 | 5.0 | 140 | 0.3450 | 0.8537 |
| 0.161 | 6.0 | 168 | 1.1238 | 0.5854 |
| 0.122 | 7.0 | 196 | 0.5699 | 0.8780 |
| 0.0493 | 8.0 | 224 | 1.1342 | 0.6585 |
| 0.0526 | 9.0 | 252 | 1.0124 | 0.8293 |
| 0.0543 | 10.0 | 280 | 1.4922 | 0.8537 |
| 0.0179 | 11.0 | 308 | 0.4348 | 0.9024 |
| 0.001 | 12.0 | 336 | 0.7229 | 0.8293 |
| 0.0002 | 13.0 | 364 | 0.6260 | 0.8780 |
| 0.0001 | 14.0 | 392 | 0.6381 | 0.8780 |
| 0.0001 | 15.0 | 420 | 0.6479 | 0.8780 |
| 0.0001 | 16.0 | 448 | 0.6572 | 0.8780 |
| 0.0001 | 17.0 | 476 | 0.6669 | 0.8780 |
| 0.0 | 18.0 | 504 | 0.6750 | 0.8780 |
| 0.0 | 19.0 | 532 | 0.6816 | 0.8780 |
| 0.0 | 20.0 | 560 | 0.6897 | 0.8780 |
| 0.0 | 21.0 | 588 | 0.6973 | 0.8780 |
| 0.0 | 22.0 | 616 | 0.7042 | 0.8780 |
| 0.0 | 23.0 | 644 | 0.7114 | 0.8780 |
| 0.0 | 24.0 | 672 | 0.7182 | 0.8780 |
| 0.0 | 25.0 | 700 | 0.7246 | 0.8780 |
| 0.0 | 26.0 | 728 | 0.7318 | 0.8780 |
| 0.0 | 27.0 | 756 | 0.7392 | 0.8780 |
| 0.0 | 28.0 | 784 | 0.7454 | 0.8780 |
| 0.0 | 29.0 | 812 | 0.7524 | 0.8780 |
| 0.0 | 30.0 | 840 | 0.7588 | 0.8780 |
| 0.0 | 31.0 | 868 | 0.7650 | 0.8780 |
| 0.0 | 32.0 | 896 | 0.7711 | 0.8780 |
| 0.0 | 33.0 | 924 | 0.7761 | 0.8780 |
| 0.0 | 34.0 | 952 | 0.7832 | 0.8780 |
| 0.0 | 35.0 | 980 | 0.7888 | 0.8780 |
| 0.0 | 36.0 | 1008 | 0.7948 | 0.8780 |
| 0.0 | 37.0 | 1036 | 0.8020 | 0.8780 |
| 0.0 | 38.0 | 1064 | 0.8075 | 0.8780 |
| 0.0 | 39.0 | 1092 | 0.8124 | 0.8780 |
| 0.0 | 40.0 | 1120 | 0.8170 | 0.8780 |
| 0.0 | 41.0 | 1148 | 0.8223 | 0.8780 |
| 0.0 | 42.0 | 1176 | 0.8266 | 0.8780 |
| 0.0 | 43.0 | 1204 | 0.8300 | 0.8780 |
| 0.0 | 44.0 | 1232 | 0.8327 | 0.8780 |
| 0.0 | 45.0 | 1260 | 0.8359 | 0.8780 |
| 0.0 | 46.0 | 1288 | 0.8390 | 0.8780 |
| 0.0 | 47.0 | 1316 | 0.8409 | 0.8780 |
| 0.0 | 48.0 | 1344 | 0.8418 | 0.8780 |
| 0.0 | 49.0 | 1372 | 0.8418 | 0.8780 |
| 0.0 | 50.0 | 1400 | 0.8418 | 0.8780 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Omar56AI2023/tok | Omar56AI2023 | 2023-11-19T13:45:22Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-19T13:38:49Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### TOK Dreambooth model trained by Omar56AI2023 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
MUSTAFAxWxE21/jeanne-gang | MUSTAFAxWxE21 | 2023-11-19T13:40:34Z | 6 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-19T13:36:41Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### JEANNE_GANG- Dreambooth model trained by MUSTAFAxWxE21 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Kaiserzc/Abortion | Kaiserzc | 2023-11-19T13:35:41Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2023-11-19T13:33:54Z | ---
license: other
license_name: ggs
license_link: LICENSE
---
|
Iralighten/test-besk-1 | Iralighten | 2023-11-19T13:31:15Z | 3 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-19T13:26:56Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### TEST-BESK-1 Dreambooth model trained by Iralighten with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
lakelz/llama-2-7b-odyssey-1.0 | lakelz | 2023-11-19T13:24:03Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-11-19T13:23:30Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
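Expressed in code, the adapter can be loaded on top of the 4-bit quantized base model roughly as follows (a minimal sketch, assuming the adapter weights are hosted in this repository):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-7b-hf"

# Quantization config matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "lakelz/llama-2-7b-odyssey-1.0")
```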
### Framework versions
- PEFT 0.6.2
|
TheBloke/Capybara-Tess-Yi-34B-200K-AWQ | TheBloke | 2023-11-19T13:23:11Z | 10 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"base_model:brucethemoose/Capybara-Tess-Yi-34B-200K",
"base_model:quantized:brucethemoose/Capybara-Tess-Yi-34B-200K",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2023-11-19T11:33:46Z | ---
base_model: brucethemoose/Capybara-Tess-Yi-34B-200K
inference: false
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
license_name: yi-license
model_creator: brucethemoose
model_name: Capybara Tess Yi 34B 200K
model_type: yi
pipeline_tag: text-generation
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Capybara Tess Yi 34B 200K - AWQ
- Model creator: [brucethemoose](https://huggingface.co/brucethemoose)
- Original model: [Capybara Tess Yi 34B 200K](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K)
<!-- description start -->
## Description
This repo contains AWQ model files for [brucethemoose's Capybara Tess Yi 34B 200K](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-GGUF)
* [brucethemoose's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 19.23 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Capybara-Tess-Yi-34B-200K-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Capybara-Tess-Yi-34B-200K-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Capybara-Tess-Yi-34B-200K-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
system_message = "You are a helpful assistant."

# Plain template string (not an f-string); both placeholders are filled per prompt below
prompt_template = '''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''

prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Capybara-Tess-Yi-34B-200K-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Capybara-Tess-Yi-34B-200K-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."
prompt_template = f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Capybara-Tess-Yi-34B-200K-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."
prompt_template = f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: brucethemoose's Capybara Tess Yi 34B 200K
**NousResearch/Nous-Capybara-34B** and **migtissera/Tess-M-Creative-v1.0**, TIES-merged with mergekit using the following config:
```
models:
- model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
# no parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.0
parameters:
density: 0.6
weight: 1.0
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
density: 0.6
weight: 1.0
merge_method: ties
base_model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
parameters:
normalize: true
int8_mask: true
dtype: float16
```
Both are 200K context models with Vicuna syntax, so:
# Prompt Format:
```
SYSTEM: ...
USER: ...
ASSISTANT: ...
```
Stop token: `</s>`
***
Credits:
https://github.com/cg123/mergekit
https://huggingface.co/NousResearch/Nous-Capybara-34B/discussions
https://huggingface.co/migtissera/Tess-M-Creative-v1.0
https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied
https://huggingface.co/01-ai/Yi-34B-200K
|
RabbitHole1412/food_classifier | RabbitHole1412 | 2023-11-19T13:18:09Z | 5 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T12:46:05Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: RabbitHole1412/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RabbitHole1412/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3840
- Validation Loss: 0.3736
- Train Accuracy: 0.905
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
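These settings match what `transformers.create_optimizer` builds for Keras models (a minimal sketch; the total step count is taken from the `decay_steps` value above):

```python
from transformers import create_optimizer

# AdamWeightDecay with a linear PolynomialDecay schedule, matching the config above
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=20_000,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```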
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7731 | 1.5926 | 0.841 | 0 |
| 1.2071 | 0.8780 | 0.845 | 1 |
| 0.6676 | 0.5110 | 0.904 | 2 |
| 0.4765 | 0.3771 | 0.916 | 3 |
| 0.3840 | 0.3736 | 0.905 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/hushem_5x_deit_tiny_rms_0001_fold1 | hkivancoral | 2023-11-19T13:15:25Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-19T13:07:35Z | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_5x_deit_tiny_rms_0001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6444444444444445
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_tiny_rms_0001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7423
- Accuracy: 0.6444
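
A quick inference sketch (not part of the original card), assuming the standard `transformers` image-classification pipeline; the image path is hypothetical:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint named in this card.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_5x_deit_tiny_rms_0001_fold1",
)
print(classifier("sample.jpg"))  # hypothetical local image path
```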
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4819 | 1.0 | 27 | 1.3858 | 0.4222 |
| 1.2591 | 2.0 | 54 | 1.5267 | 0.3556 |
| 0.7593 | 3.0 | 81 | 1.2907 | 0.4667 |
| 0.5581 | 4.0 | 108 | 1.8771 | 0.5111 |
| 0.2708 | 5.0 | 135 | 1.1107 | 0.6 |
| 0.0918 | 6.0 | 162 | 1.6349 | 0.6 |
| 0.0815 | 7.0 | 189 | 1.8415 | 0.5556 |
| 0.0759 | 8.0 | 216 | 2.0598 | 0.5778 |
| 0.0537 | 9.0 | 243 | 1.9632 | 0.6222 |
| 0.0015 | 10.0 | 270 | 1.8818 | 0.6444 |
| 0.0003 | 11.0 | 297 | 2.0815 | 0.6222 |
| 0.0001 | 12.0 | 324 | 2.0650 | 0.6444 |
| 0.0001 | 13.0 | 351 | 2.0989 | 0.6444 |
| 0.0001 | 14.0 | 378 | 2.1289 | 0.6444 |
| 0.0001 | 15.0 | 405 | 2.1588 | 0.6444 |
| 0.0001 | 16.0 | 432 | 2.1838 | 0.6222 |
| 0.0001 | 17.0 | 459 | 2.2142 | 0.6444 |
| 0.0 | 18.0 | 486 | 2.2371 | 0.6444 |
| 0.0 | 19.0 | 513 | 2.2604 | 0.6444 |
| 0.0 | 20.0 | 540 | 2.2825 | 0.6444 |
| 0.0 | 21.0 | 567 | 2.3034 | 0.6444 |
| 0.0 | 22.0 | 594 | 2.3271 | 0.6444 |
| 0.0 | 23.0 | 621 | 2.3489 | 0.6444 |
| 0.0 | 24.0 | 648 | 2.3707 | 0.6444 |
| 0.0 | 25.0 | 675 | 2.3919 | 0.6444 |
| 0.0 | 26.0 | 702 | 2.4064 | 0.6444 |
| 0.0 | 27.0 | 729 | 2.4258 | 0.6444 |
| 0.0 | 28.0 | 756 | 2.4479 | 0.6444 |
| 0.0 | 29.0 | 783 | 2.4665 | 0.6444 |
| 0.0 | 30.0 | 810 | 2.4872 | 0.6444 |
| 0.0 | 31.0 | 837 | 2.5073 | 0.6444 |
| 0.0 | 32.0 | 864 | 2.5259 | 0.6444 |
| 0.0 | 33.0 | 891 | 2.5455 | 0.6444 |
| 0.0 | 34.0 | 918 | 2.5641 | 0.6444 |
| 0.0 | 35.0 | 945 | 2.5817 | 0.6444 |
| 0.0 | 36.0 | 972 | 2.6001 | 0.6444 |
| 0.0 | 37.0 | 999 | 2.6164 | 0.6444 |
| 0.0 | 38.0 | 1026 | 2.6335 | 0.6444 |
| 0.0 | 39.0 | 1053 | 2.6484 | 0.6444 |
| 0.0 | 40.0 | 1080 | 2.6642 | 0.6444 |
| 0.0 | 41.0 | 1107 | 2.6789 | 0.6444 |
| 0.0 | 42.0 | 1134 | 2.6927 | 0.6444 |
| 0.0 | 43.0 | 1161 | 2.7058 | 0.6444 |
| 0.0 | 44.0 | 1188 | 2.7171 | 0.6444 |
| 0.0 | 45.0 | 1215 | 2.7264 | 0.6444 |
| 0.0 | 46.0 | 1242 | 2.7343 | 0.6444 |
| 0.0 | 47.0 | 1269 | 2.7400 | 0.6444 |
| 0.0 | 48.0 | 1296 | 2.7423 | 0.6444 |
| 0.0 | 49.0 | 1323 | 2.7423 | 0.6444 |
| 0.0 | 50.0 | 1350 | 2.7423 | 0.6444 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
gu-ai/hassan-test | gu-ai | 2023-11-19T13:13:16Z | 3 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-19T13:06:39Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### hassan-test Dreambooth model trained by gu-ai with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
MayIBorn/mrpc-debertav3_initialize_dW_A_with_svd_from_back | MayIBorn | 2023-11-19T12:59:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/deberta-v3-base",
"base_model:adapter:microsoft/deberta-v3-base",
"region:us"
]
| null | 2023-11-19T12:59:06Z | ---
library_name: peft
base_model: microsoft/deberta-v3-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
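
A minimal sketch of expressing the same quantization settings with `transformers.BitsAndBytesConfig`; the loading call is illustrative, and only the flag values come from the list above:

```python
import torch
from transformers import AutoModel, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Illustrative: load the base model named in this card with that config.
base_model = AutoModel.from_pretrained(
    "microsoft/deberta-v3-base", quantization_config=bnb_config
)
```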
### Framework versions
- PEFT 0.7.0.dev0
|
AmanMussa/llama2-kazakh-7b | AmanMussa | 2023-11-19T12:40:06Z | 15 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"kk",
"dataset:AmanMussa/kazakh-instruction-v1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-16T15:20:53Z | ---
license: mit
datasets:
- AmanMussa/kazakh-instruction-v1
language:
- kk
metrics:
- code_eval
pipeline_tag: text-generation
---
# Model Card for Model ID
A LLaMA 2 model for the Kazakh language.
## Model Details
This model is a parameter-efficient fine-tune of Meta's LLaMA 2 for the Kazakh language.
### Model Description
- **Developed by:** Mussa Aman
- **Model type:** Question Answering.
- **Language(s) (NLP):** Kazakh
- **License:** MIT
- **Finetuned from model [optional]:** Meta LLAMA 2
### Model Sources [optional]
### Out-of-Scope Use
The model still makes occasional mistakes at inference time.
## Bias, Risks, and Limitations
The parameter count could be larger, and the dataset still needs to be optimized.
### Training Data

## Evaluation
Run summary:

- train/epoch: 1.0
- train/global_step: 3263
- train/learning_rate: 0.0
- train/loss: 0.975
- train/total_flos: 5.1749473473500774e+17
- train/train_loss: 0.38281
- train/train_runtime: 13086.8735 (seconds)
- train/train_samples_per_second: 3.989
- train/train_steps_per_second: 0.249
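
A minimal usage sketch (not from the original card), assuming the standard `transformers` text-generation pipeline; the Kazakh prompt is just an example:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="AmanMussa/llama2-kazakh-7b")
prompt = "Қазақстан туралы қысқаша айтып бер."  # "Tell me briefly about Kazakhstan."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```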
## Environment
- **Hardware Type:** NVIDIA A100 40GB
- **Hours used:** 10 hours
- **Cloud Provider:** Google Colab
## Citation

**BibTeX:**

@misc{aman_2023, author = {Aman Mussa}, title = {Self-instruct data pairs for Kazakh language}, year = {2023}, howpublished = {\url{https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1}}}

**APA:**

Aman, M. (2023). Self-instruct data pairs for Kazakh language. Retrieved from https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1
## Model Card Contact
Please contact via email: [email protected] |
CodeinJax/pretrained_bert_scratch-finetuned-sst2 | CodeinJax | 2023-11-19T12:26:32Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:ayushmodi12/pretrained_bert_scratch",
"base_model:finetune:ayushmodi12/pretrained_bert_scratch",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-18T18:59:59Z | ---
license: apache-2.0
base_model: ayushmodi12/pretrained_bert_scratch
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: pretrained_bert_scratch-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.5091743119266054
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pretrained_bert_scratch-finetuned-sst2
This model is a fine-tuned version of [ayushmodi12/pretrained_bert_scratch](https://huggingface.co/ayushmodi12/pretrained_bert_scratch) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7219
- Accuracy: 0.5092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6959 | 1.0 | 4210 | 0.6945 | 0.4908 |
| 0.6945 | 2.0 | 8420 | 0.7219 | 0.5092 |
| 0.6963 | 3.0 | 12630 | 0.7028 | 0.5092 |
| 0.699 | 4.0 | 16840 | 0.6938 | 0.5092 |
| 0.6938 | 5.0 | 21050 | 0.7048 | 0.5092 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
vildanh/az_gpt_alpaca | vildanh | 2023-11-19T12:21:02Z | 1 | 1 | peft | [
"peft",
"region:us"
]
| null | 2023-11-19T12:00:16Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
DayStay/LunarLander-v2 | DayStay | 2023-11-19T12:04:40Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-19T12:04:35Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -169.31 +/- 102.39
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'test',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'DayStay/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
lordjia/by-feng-zikai | lordjia | 2023-11-19T11:28:34Z | 89 | 7 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"style",
"ink painting",
"comic",
"manhua",
"feng zikai",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
]
| text-to-image | 2023-11-19T11:28:33Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- style
- ink painting
- comic
- manhua
- feng zikai
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: by Feng Zikai
widget:
- text: 'Batman at night, by Feng Zikai '
output:
url: >-
3495572.jpeg
- text: 'iron man flying over the city, by Feng Zikai '
output:
url: >-
3495565.jpeg
- text: 'spider man jumping among buildings, by Feng Zikai '
output:
url: >-
3495566.jpeg
- text: 'wonder woman fighting against super man, by Feng Zikai '
output:
url: >-
3495571.jpeg
- text: 'Arcee in Transformers, by Feng Zikai '
output:
url: >-
3495811.jpeg
- text: 'Hot Rod in Transformers, by Feng Zikai '
output:
url: >-
3495810.jpeg
- text: 'Bumblebee, Transformers, by Feng Zikai '
output:
url: >-
3495813.jpeg
- text: 'Optimus Prime, Transformers, by Feng Zikai '
output:
url: >-
3495812.jpeg
- text: 'city view of hongkong, bay view, at night, by Feng Zikai '
output:
url: >-
3495822.jpeg
- text: 'city view of london, river view, by Feng Zikai '
output:
url: >-
3495823.jpeg
---
# 丰子恺漫画 - By FENG Zikai
<Gallery />
<p>丰子恺(<a target="_blank" rel="ugc" href="https://zh.wikipedia.org/zh-hans/%E8%B1%90%E5%AD%90%E6%84%B7">维基百科</a>),中国散文家、画家、文学家、美术家与音乐教育家。师从弘一法师李叔同,以中西融合画法创作漫画及散文而著名,是中国漫画艺术的先驱。他的漫画造形简约,画风朴实,饶富童趣,在众多画家中,独树一格。此 LoRA 意在模仿其标志性水墨漫画风格,希望给喜欢丰子恺作品的朋友带来快乐。</p><p>基于 <strong><span style="color:rgb(253, 126, 20)">SDXL 1.0</span></strong> checkpoint。使用时,请加上触发词:<strong><span style="color:rgb(253, 126, 20)">by Feng Zikai</span></strong>,推荐权重(weight)<strong><span style="color:rgb(253, 126, 20)">0.8-1.0</span></strong></p><p>Prompt 示例:</p><pre><code>Batman at night, by Feng Zikai <lora:fengzikai_v1.0_XL:0.8></code></pre><hr /><p>Feng Zikai (<a target="_blank" rel="ugc" href="https://en.wikipedia.org/wiki/Feng_Zikai">Wikipedia</a>), Chinese essayist, painter, writer, artist and music educator. He studied under Master Hongyi Li Shutong and is famous for his comics and prose creations that combine Chinese and Western painting techniques. He is a pioneer of Chinese comics art. His comics have simple shapes, simple style, and full of childishness, making him unique among many painters. This LoRA is intended to imitate his iconic ink comic style, hoping to bring happiness to friends who like Feng Zikai’s works.</p><p>Based on <strong><span style="color:rgb(253, 126, 20)">SDXL 1.0</span></strong> checkpoint. When using, please add trigger words: <strong><span style="color:rgb(253, 126, 20)">by Feng Zikai</span></strong><span style="color:rgb(209, 213, 219)">, with a recommended weight of </span><strong><span style="color:rgb(253, 126, 20)">0.8~1.0</span></strong><span style="color:rgb(209, 213, 219)">.</span></p><p>Prompt sample:</p><pre><code>Batman at night, by Feng Zikai <lora:fengzikai_v1.0_XL:0.8></code></pre>
## Image examples for the model:

> iron man flying over the city, by Feng Zikai

> spider man jumping among buildings, by Feng Zikai

> wonder woman fighting against super man, by Feng Zikai

> Arcee in Transformers, by Feng Zikai

> Hot Rod in Transformers, by Feng Zikai

> Bumblebee, Transformers, by Feng Zikai

> Optimus Prime, Transformers, by Feng Zikai

> city view of hongkong, bay view, at night, by Feng Zikai

> city view of london, river view, by Feng Zikai
|
metsmania/distilbert-base-uncased-finetuned-emotion | metsmania | 2023-11-19T11:21:10Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-19T09:34:40Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9256217595924592
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2157
- Accuracy: 0.9255
- F1: 0.9256
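
A minimal usage sketch (added for illustration, not part of the original card), using the explicit tokenizer/model API from `transformers`; the input sentence is an arbitrary example:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "metsmania/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I'm thrilled with how this turned out!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map class probabilities back to the emotion labels stored in the config.
probs = logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```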
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.808 | 1.0 | 250 | 0.3099 | 0.9065 | 0.9049 |
| 0.2477 | 2.0 | 500 | 0.2157 | 0.9255 | 0.9256 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
shadowlilac/visor | shadowlilac | 2023-11-19T11:04:17Z | 29 | 2 | transformers | [
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"image-captioning",
"anime",
"image-to-text",
"dataset:shadowlilac/anime",
"license:other",
"endpoints_compatible",
"region:us"
]
| image-to-text | 2023-11-14T23:26:38Z | ---
pipeline_tag: image-to-text
tags:
- image-captioning
- anime
license: other
license_name: shadowlilac-extension-bsd-3
license_link: LICENSE
datasets:
- shadowlilac/anime
---
# Visor - Natural-Language Anime Tagging

Visor is a natural-language image-tagging model built on the BLIP architecture.

A potential use case is captioning anime images to produce training data for diffusion models.
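
A minimal captioning sketch (not from the original card), assuming this checkpoint ships the usual BLIP processor files and using the standard BLIP conditional-generation API; the image path is hypothetical:

```python
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("shadowlilac/visor")
model = BlipForConditionalGeneration.from_pretrained("shadowlilac/visor")

image = Image.open("anime.png")  # hypothetical local image
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(out[0], skip_special_tokens=True))
```
|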
kelzla/Mistral-7B-v0.1-odyssey-1.0 | kelzla | 2023-11-19T10:55:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"region:us"
]
| null | 2023-11-19T08:32:46Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
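
A minimal loading sketch (not from the original card), assuming this repo holds PEFT/LoRA adapter weights for Mistral-7B-v0.1, as the repo name suggests:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply the adapter weights from this repo.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "kelzla/Mistral-7B-v0.1-odyssey-1.0")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```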
### Framework versions
- PEFT 0.5.0
|
iampedroalz/q-Taxi-v3 | iampedroalz | 2023-11-19T10:54:22Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-19T10:46:22Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course
# notebook (it downloads and unpickles the saved Q-table from the Hub).
model = load_from_hub(repo_id="iampedroalz/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
iampedroalz/q-FrozenLake-v1-4x4-noSlippery | iampedroalz | 2023-11-19T10:44:05Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-18T12:49:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course
# notebook (it downloads and unpickles the saved Q-table from the Hub).
model = load_from_hub(repo_id="iampedroalz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes
# (e.g. is_slippery=False for this non-slippery FrozenLake variant).
env = gym.make(model["env_id"])
```
|
yorimariia/real-faces-women-ai | yorimariia | 2023-11-19T10:41:56Z | 0 | 1 | null | [
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-11-19T10:40:33Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Real_faces_women_ai Dreambooth model trained by yorimariia with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|