modelId (string, lengths 5-138) | author (string, lengths 2-42) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-08 12:28:15) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (string, 415 classes) | tags (sequence, lengths 1-4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-08 12:27:26) | card (string, lengths 11-1.01M)
---|---|---|---|---|---|---|---|---|---|
Nutanix/llama3-8b-instruct-15000-context-length | Nutanix | "2024-07-11T20:08:01Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:finetune:unsloth/llama-3-8b-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-11T19:56:19Z" | ---
base_model: unsloth/llama-3-8b-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Nutanix
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
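The card ends at the badge without a usage snippet; as a purely illustrative sketch (not from the author), the checkpoint should load through the standard `transformers` chat path:

```python
# Hypothetical usage sketch -- not part of the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nutanix/llama3-8b-instruct-15000-context-length"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Why do long-context models matter?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```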
|
joshhu1123/Llama-2-7b-chat-hf-Qlora-BI55-BImedqa-no4 | joshhu1123 | "2023-10-14T03:12:42Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:joshhu1123/Llama-2-7b-chat-hf-Qlora-BI55",
"base_model:adapter:joshhu1123/Llama-2-7b-chat-hf-Qlora-BI55",
"region:us"
] | null | "2023-10-14T03:12:34Z" | ---
library_name: peft
base_model: joshhu1123/Llama-2-7b-chat-hf-Qlora-BI55
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
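The section above is an empty template placeholder; purely as an illustration, the usual PEFT adapter-loading pattern for this repo (repo names taken from the card's metadata, everything else an assumption) looks like:

```python
# Illustrative sketch -- the card itself leaves this section blank.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "joshhu1123/Llama-2-7b-chat-hf-Qlora-BI55"
adapter_id = "joshhu1123/Llama-2-7b-chat-hf-Qlora-BI55-BImedqa-no4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter
```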
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
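For reference, the same fields map onto a `transformers` `BitsAndBytesConfig` roughly as follows (an illustrative reconstruction, not code from the card):

```python
# Sketch: the quantization settings listed above, expressed programmatically.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)
```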
### Framework versions
- PEFT 0.6.0.dev0
|
danielkosyra/cosine_2000_9e-4_16b_w0.08 | danielkosyra | "2024-07-03T19:34:25Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-03T19:34:05Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: cosine_2000_9e-4_16b_w0.08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cosine_2000_9e-4_16b_w0.08
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0009
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 250
- training_steps: 2000
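Expressed as `transformers` `TrainingArguments`, the hyperparameters above correspond roughly to the following (an illustrative reconstruction; the actual training script is not shown in the card):

```python
# Sketch: the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="cosine_2000_9e-4_16b_w0.08",
    learning_rate=9e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,  # 16 x 16 = total train batch size 256
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=250,
    max_steps=2000,                  # Adam betas/epsilon match the defaults
)
```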
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.293 | 0.7930 | 250 | 4.7425 |
| 4.0871 | 1.5860 | 500 | 3.5106 |
| 3.275 | 2.3791 | 750 | 3.1563 |
| 2.967 | 3.1721 | 1000 | 2.9887 |
| 2.7476 | 3.9651 | 1250 | 2.8838 |
| 2.5287 | 4.7581 | 1500 | 2.8292 |
| 2.3976 | 5.5511 | 1750 | 2.8038 |
| 2.3199 | 6.3442 | 2000 | 2.7986 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
magnifi/Phi3_intent_v49_1_w_unknown_6_lr_0.002 | magnifi | "2024-12-30T16:44:45Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-30T16:42:42Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TobiGeth/tg_user_706551794_lora_1740710919 | TobiGeth | "2025-02-28T03:01:13Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-28T03:01:11Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: USER_706551794_1740710919
---
# Tg_User_706551794_Lora_1740710919
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `USER_706551794_1740710919` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('TobiGeth/tg_user_706551794_lora_1740710919', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
huggingtweets/nuclearkatie | huggingtweets | "2022-10-26T16:33:35Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-10-26T16:28:44Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/nuclearkatie/1666801970584/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1334988663629942789/nDPoGclx_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Katie 🎃Boo👻-mah</div>
<div style="text-align: center; font-size: 14px;">@nuclearkatie</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Katie 🎃Boo👻-mah.
| Data | Katie 🎃Boo👻-mah |
| --- | --- |
| Tweets downloaded | 3205 |
| Retweets | 1130 |
| Short tweets | 225 |
| Tweets kept | 1850 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/vtpuc3cq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nuclearkatie's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1vpu6vsq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1vpu6vsq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/nuclearkatie')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kenhktsui/qwen2.5-7b-instruct-thinking-sft-merged-gguf | kenhktsui | "2024-12-30T15:16:42Z" | 11 | 1 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"dataset:kenhktsui/longtalk-cot-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-12T18:46:33Z" | ---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
datasets:
- kenhktsui/longtalk-cot-v0.1
---
# kenhktsui/qwen2.5-7b-instruct-thinking-sft-merged-gguf
With 1 epoch of SFT, it boosts the ProcessBench GSM-8k subset by 26%. However, it scores worse on GSM-8k itself.
This seems to be either 1) the metrics not being measured correctly due to answer parsing, or 2) a sign of overfitting to final answers, as the model is then unable to critique reasoning steps.
||[kenhktsui/qwen2.5-7b-instruct-thinking-sft-merged](https://huggingface.co/kenhktsui/qwen2.5-7b-instruct-thinking-sft-merged)|[Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)|
|---|---|---|
|ProcessBench (GSM-8k)|**46.0**|36.5 (reported)|
|GSM-8k|78.54|**84.23**|
This needs to be investigated further.
I have inspected how **model behavior changed** after SFT. As expected, the fine-tuned model thinks much longer, and often solves a problem with multiple approaches instead of one.
### Quick Test 1: Is 9.11 greater than 9.9?
<details>
<summary> response from kenhktsui/qwen2.5-7b-instruct-thinking-sft-merged-gguf </summary>
No, 9.11 is not greater than 9.9. Let's think about this step by step.
First, it helps to understand that both numbers are decimal numbers, specifically they have two digits after the decimal point. So, 9.11 and 9.9 can be compared directly based on their
values in each place value: whole number part and the tenths and hundredths parts.
Let's break them down:
- For 9.11:
- The whole number part is 9.
- The tenths place is 1 (which means one tenth).
- The hundredths place is also 1 (one hundredth).
- For 9.9:
- The whole number part is also 9.
- But here, the tenths place is 9 (nine tenths), and there's no hundredths digit after that; it's considered to have zero in the hundredths place because decimal numbers with fewer digits
are filled with zeros.
So, comparing them:
- Both have the same whole number part: 9.
- For the tenth place:
- 1 vs. 9.
Since 9 is greater than 1, this already tells us that 9.9 is larger than 9.11. But to be thorough, let's look at the hundredths places as well:
- In 9.11: hundredths = 1
- In 9.9: hundredths = 0
Even though there are no digits in the hundredths place for 9.9, it's still considered zero because of the way decimals work.
Therefore, overall, 9.11 < 9.9.
Wait a minute, but I should make sure that I'm not missing anything here. Sometimes, people might think about adding or subtracting numbers differently, but in this case, since both numbers
have the same whole number part and one has a larger tenths digit, it's clear which is larger.
Another way to look at it is to consider them as percentages:
- 9.11 = 911%
- 9.9 = 990%
But that might be confusing; in percentage terms, 990% is still greater than 911%.
Alternatively, if we multiply both numbers by 100 to eliminate the decimal point:
- 9.11 * 100 = 911
- 9.9 * 100 = 990
Again, 990 is greater than 911.
So, no matter how we look at it, 9.9 is larger than 9.11.
I think that covers the comparison thoroughly. There doesn't seem to be any trick or special case here; it's a straightforward decimal comparison once you understand place values.
**Final Answer**
\[ \boxed{9.11 < 9.9} \]
</details>
<details>
<summary> response from Qwen/Qwen2.5-7B-Instruct-GGUF </summary>
No, 9.11 is not greater than 9.9. When comparing decimal numbers, you can compare them digit by digit from left to right. In this case:
- The whole number part of both numbers is the same (9).
- Comparing the tenths place, we see that 1 in 9.11 is larger than 9 in 9.9.
- Therefore, 9.11 > 9.9.
So, 9.11 is greater than 9.9.
</details>
### Quick Test 2: How many letters R in the word strawberry?
<details>
<summary> response from kenhktsui/qwen2.5-7b-instruct-thinking-sft-merged-gguf </summary>
Let's see. The word is "strawberry." I need to count how many 'R' letters are there in this word.
First, let me write down the word again: s-t-r-a-w-b-e-r-r-y.
Now, I'll go through each letter and check if it's an 'R'.
Starting with the first letter: S - that's not R.
Second letter: T - also not R.
Third letter: R - yes, that's an 'R'. So, one so far.
Fourth letter: A - nope.
Fifth letter: W - still not R.
Sixth letter: B - not R either.
Seventh letter: E - no.
Eighth letter: R - another 'R'! That makes two now.
Ninth letter: B - no R here.
Tenth letter: E - no R.
Eleventh letter: R - yet again, an 'R'. Now it's three.
Twelfth letter: Y - not R.
So, counting all the way through, I've found three 'R' letters in "strawberry."
Wait a minute, let me double-check. Maybe there are more 'R's that I missed.
Looking at "s-t-r-a-w-b-e-r-r-y." Yes, I did find three: 3rd, 8th, and 11th positions.
I think that's it. There don't seem to be any other 'R's in the word.
Therefore, there are 3 letters R in the word "strawberry."
**Final Answer**
\[ \boxed{3} \]
</details>
<details>
<summary> response from Qwen/Qwen2.5-7B-Instruct-GGUF </summary>
The letter "R" appears 1 time in the word "strawberry".
</details>
### Try out in Ollama!
```
ollama run hf.co/kenhktsui/qwen2.5-7b-instruct-thinking-sft-merged-gguf
```
# Uploaded model
- **Developed by:** kenhktsui
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
aflah/llama-3-8b-Instruct-bnb-4bit__Climate-Science-Epochs-1__Merge-to-4-bit | aflah | "2024-06-19T10:26:31Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-19T10:24:06Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** aflah
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SamirXR/NyX-Roleplay-7b | SamirXR | "2024-02-10T11:05:50Z" | 5 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"chat",
"en",
"dataset:SamirXR/NyX-Roleplay",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-10T08:36:40Z" | ---
license: apache-2.0
datasets:
- SamirXR/NyX-Roleplay
language:
- en
tags:
- chat
---

# NyX Roleplay
A 7-billion-parameter Llama model fine-tuned by NyX AI for roleplay, trained on NyX's roleplay dataset.
## Usage
```bash
pip install transformers accelerate
```
```py
from transformers import AutoTokenizer
import transformers
import torch
model = "SamirXR/NyX-Roleplay-7b"
prompt = "Heyy! *User Blushes and Looks at NyX With Shyness*"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    f'<s>[INST] {prompt} [/INST]',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)

for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
## Usecase
Utilized for roleplay, with the model assuming the character of 'NyX', a female persona.
## Contact Me
Instagram : [Samir.Xr](https://instagram.com/samir.xr) <br>
Github : [SamirXr](https://github.com/SamirXR) <br>
Discord : [NyX AI](https://discord.com)
|
AntboyAi011/AntboyAi | AntboyAi011 | "2025-02-19T02:21:15Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-02-19T02:21:15Z" | ---
license: apache-2.0
---
|
LarryAIDraw/zeta_ver1_0 | LarryAIDraw | "2023-10-04T01:24:17Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-10-03T01:10:31Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/155516/zeta-or-kage-no-jitsuryokusha-ni-naritakute |
uumlaut/ddpm-vangogh | uumlaut | "2023-01-06T17:53:48Z" | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | "2023-01-06T15:31:48Z" | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-vangogh
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
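Until the TODO above is filled in, a plausible sketch (assuming the standard `DDPMPipeline` path implied by the repo's `diffusers:DDPMPipeline` tag) would be:

```python
# Hypothetical sketch -- the card's own snippet is still marked TODO.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("uumlaut/ddpm-vangogh")
image = pipeline(num_inference_steps=1000).images[0]  # unconditional sampling
image.save("vangogh_sample.png")
```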
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/uumlaut/ddpm-vangogh/tensorboard?#scalars)
|
RichardErkhov/diffusionfamily_-_diffullama-8bits | RichardErkhov | "2025-03-24T02:07:59Z" | 0 | 0 | null | [
"safetensors",
"llama",
"arxiv:2410.17891",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-24T02:02:28Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
diffullama - bnb 8bits
- Model creator: https://huggingface.co/diffusionfamily/
- Original model: https://huggingface.co/diffusionfamily/diffullama/
Original model description:
---
library_name: transformers
base_model:
- meta-llama/Llama-2-7b-hf
tags:
- llama-factory
- full
- diffusion
model-index:
- name: diffullama
results: []
license: apache-2.0
datasets:
- bigcode/starcoderdata
- cerebras/SlimPajama-627B
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# diffullama
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
## Model description
Details and model loading can be seen [https://github.com/HKUNLP/DiffuLLaMA](https://github.com/HKUNLP/DiffuLLaMA).
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
```
@misc{gong2024scalingdiffusionlanguagemodels,
title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
year={2024},
eprint={2410.17891},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.17891},
}
```
|
LnL-AI/Yi-1.5-34B-4bit-gptq | LnL-AI | "2024-05-14T07:49:35Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-05-13T16:24:28Z" | ---
license: unknown
---
### Quantizing Config:
```json
{
    "bits": 4,
    "group_size": 128,
    "damp_percent": 0.005,
    "desc_act": false,
    "static_groups": false,
    "sym": false,
    "true_sequential": true,
    "model_name_or_path": "",
    "model_file_base_name": "model",
    "quant_method": "gptq",
    "checkpoint_format": "gptq",
    "meta": {
        "quantizer": "autogptq:0.8.0.dev1"
    }
}
```
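The card provides no loading example; a minimal hypothetical sketch for this GPTQ checkpoint via `transformers` (an assumption based on the repo's `transformers`/`gptq` tags, not the author's instructions):

```python
# Hypothetical loading sketch -- not part of the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LnL-AI/Yi-1.5-34B-4bit-gptq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```
|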
bowilleatyou/2a7088df-6794-40b7-8d72-74ffb61730b2 | bowilleatyou | "2025-04-03T13:03:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-03T12:05:14Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
shuyuej/Mistral-7B-Instruct-v0.3-GPTQ | shuyuej | "2024-07-25T02:10:05Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-07-23T20:23:34Z" | ---
license: apache-2.0
---
# The Quantized Mistral 7B Instruct v0.3 Model
Original Base Model: `mistralai/Mistral-7B-Instruct-v0.3`.<br>
Link: [https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
## Quantization Configurations
```json
"quantization_config": {
    "batch_size": 1,
    "bits": 4,
    "block_name_to_quantize": null,
    "cache_block_outputs": true,
    "damp_percent": 0.1,
    "dataset": null,
    "desc_act": false,
    "exllama_config": {
        "version": 1
    },
    "group_size": 128,
    "max_input_length": null,
    "model_seqlen": null,
    "module_name_preceding_first_block": null,
    "modules_in_block_to_quantize": null,
    "pad_token_id": null,
    "quant_method": "gptq",
    "sym": true,
    "tokenizer": null,
    "true_sequential": true,
    "use_cuda_fp16": false,
    "use_exllama": true
},
```
## Source Codes
Source Codes: [https://github.com/vkola-lab/medpodgpt/tree/main/quantization](https://github.com/vkola-lab/medpodgpt/tree/main/quantization).
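No inference snippet is provided; a minimal hypothetical sketch for loading the quantized weights through `transformers`, which dispatches GPTQ checkpoints automatically (illustrative only):

```python
# Hypothetical loading sketch -- not part of the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shuyuej/Mistral-7B-Instruct-v0.3-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```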
|
mjpsm/Togo | mjpsm | "2024-12-31T00:17:17Z" | 171 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:mjpsm/Togo-Dataset",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-12-25T18:06:55Z" | ---
library_name: transformers
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: Togo
results: []
datasets:
- mjpsm/Togo-Dataset
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Togo
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 0.0038 |
| No log | 2.0 | 4 | 0.0003 |
| No log | 3.0 | 6 | 0.0003 |
| No log | 4.0 | 8 | 0.0002 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1
- Datasets 3.1.0
- Tokenizers 0.20.0 |
mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF | mradermacher | "2025-03-06T13:18:58Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Yuuta208/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear",
"base_model:quantized:Yuuta208/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-06T12:41:45Z" | ---
base_model: Yuuta208/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Yuuta208/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
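As a concrete illustration (the package choice and file name are assumptions, not from the card), one of the quants below can be run from Python with `llama-cpp-python`:

```python
# Hypothetical sketch using llama-cpp-python; pick any quant file from the table.
from llama_cpp import Llama

llm = Llama(model_path="Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q4_K_M.gguf")
out = llm("Hello, how are you?", max_tokens=128)
print(out["choices"][0]["text"])
```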
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear-GGUF/resolve/main/Hermes-3-Llama-3.1-8B-Dolphin3.0-Llama3.1-8B-Merged-linear.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Mustain/finetuned_cyberagent_squad_4bit | Mustain | "2023-10-20T08:19:44Z" | 5 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:cyberagent/open-calm-7b",
"base_model:adapter:cyberagent/open-calm-7b",
"region:us"
] | null | "2023-10-20T08:08:26Z" | ---
library_name: peft
base_model: cyberagent/open-calm-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
YakovElm/Apache5Classic_Balance_DATA_ratio_4 | YakovElm | "2023-05-30T16:54:26Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-30T16:53:22Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache5Classic_Balance_DATA_ratio_4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache5Classic_Balance_DATA_ratio_4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4243
- Train Accuracy: 0.8162
- Validation Loss: 0.4969
- Validation Accuracy: 0.8223
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
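The serialized optimizer dict above corresponds roughly to the following `tf.keras` construction (an illustrative sketch, not taken from the training script):

```python
# Sketch: the optimizer settings listed above, in tf.keras form.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
    clipnorm=1.0,
    amsgrad=False,
)
```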
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5149 | 0.7844 | 0.4510 | 0.8200 | 0 |
| 0.4849 | 0.7976 | 0.4359 | 0.8326 | 1 |
| 0.4243 | 0.8162 | 0.4969 | 0.8223 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
stablediffusionapi/realistic-vision-v6.0-b1-inpaint-n | stablediffusionapi | "2024-04-25T13:00:04Z" | 86 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-04-25T12:58:40Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below, and change **model_id** to "realistic-vision-v6.0-b1-inpaint-n".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/realistic-vision-v6.0-b1-inpaint-n)
Model link: [View model](https://modelslab.com/models/realistic-vision-v6.0-b1-inpaint-n)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "realistic-vision-v6.0-b1-inpaint-n",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
mradermacher/stackexchange_webapps-GGUF | mradermacher | "2024-12-30T07:13:35Z" | 22 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:mlfoundations-dev/stackexchange_webapps",
"base_model:quantized:mlfoundations-dev/stackexchange_webapps",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-30T07:05:15Z" | ---
base_model: mlfoundations-dev/stackexchange_webapps
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlfoundations-dev/stackexchange_webapps
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/stackexchange_webapps-GGUF/resolve/main/stackexchange_webapps.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
PrunaAI/HuggingFaceTB-SmolLM2-1.7B-Instruct-HQQ-4bit-smashed | PrunaAI | "2025-03-29T03:33:34Z" | 9 | 0 | null | [
"llama",
"pruna-ai",
"hqq",
"region:us"
] | null | "2024-12-25T14:59:02Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo ORIGINAL_REPO_NAME are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/HuggingFaceTB-SmolLM2-1.7B-Instruct-HQQ-4bit-smashed", device_map='auto')
except:
    model = AutoHQQHFModel.from_quantized("PrunaAI/HuggingFaceTB-SmolLM2-1.7B-Instruct-HQQ-4bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration details are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
Bvdlaan/bvdlaan | Bvdlaan | "2025-02-16T16:01:00Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-16T15:51:18Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BVDLAAN
---
# Bvdlaan
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BVDLAAN` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('bvdlaan/bvdlaan', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word BVDLAAN in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lesso06/50db49f7-6016-4a27-87ea-19bb673b3853 | lesso06 | "2025-02-22T17:44:05Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"region:us"
] | null | "2025-02-22T17:29:07Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 50db49f7-6016-4a27-87ea-19bb673b3853
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ca06f82b9a0d5c20_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ca06f82b9a0d5c20_train_data.json
type:
field_instruction: s3_key
field_output: default_caption
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso06/50db49f7-6016-4a27-87ea-19bb673b3853
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000206
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/ca06f82b9a0d5c20_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 60
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d5bed903-25dd-42bf-bd3a-42616d0040a1
wandb_project: 06a
wandb_run: your_name
wandb_runid: d5bed903-25dd-42bf-bd3a-42616d0040a1
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 50db49f7-6016-4a27-87ea-19bb673b3853
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0066
## Model description
More information needed
## Intended uses & limitations
More information needed
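That said, a minimal usage sketch for loading the LoRA adapter on top of its base model (this assumes the adapter weights in this repo load with standard PEFT usage):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Mistral-Nemo-Base-2407", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Mistral-Nemo-Base-2407")
# Apply the fine-tuned LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "lesso06/50db49f7-6016-4a27-87ea-19bb673b3853")
```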
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000206
- train_batch_size: 4
- eval_batch_size: 4
- seed: 60
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 1.0734 |
| 0.0316 | 0.0261 | 50 | 0.0193 |
| 0.0336 | 0.0522 | 100 | 0.0062 |
| 0.0068 | 0.0783 | 150 | 0.0124 |
| 0.0554 | 0.1044 | 200 | 0.1124 |
| 0.0015 | 0.1305 | 250 | 0.0066 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jsfs11/MoEv4Config-TIESwithRescale-7b | jsfs11 | "2024-04-22T04:00:12Z" | 7 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuTrixOmniBe-7B-model-remix",
"PetroGPT/WestSeverus-7B-DPO",
"vanillaOVO/supermario_v4",
"base_model:Kukedlc/NeuTrixOmniBe-7B-model-remix",
"base_model:merge:Kukedlc/NeuTrixOmniBe-7B-model-remix",
"base_model:PetroGPT/WestSeverus-7B-DPO",
"base_model:merge:PetroGPT/WestSeverus-7B-DPO",
"base_model:vanillaOVO/supermario_v4",
"base_model:merge:vanillaOVO/supermario_v4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-22T03:52:52Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuTrixOmniBe-7B-model-remix
- PetroGPT/WestSeverus-7B-DPO
- vanillaOVO/supermario_v4
base_model:
- Kukedlc/NeuTrixOmniBe-7B-model-remix
- PetroGPT/WestSeverus-7B-DPO
- vanillaOVO/supermario_v4
---
# MoEv4Config-TIESwithRescale-7b
MoEv4Config-TIESwithRescale-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuTrixOmniBe-7B-model-remix](https://huggingface.co/Kukedlc/NeuTrixOmniBe-7B-model-remix)
* [PetroGPT/WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)
* [vanillaOVO/supermario_v4](https://huggingface.co/vanillaOVO/supermario_v4)
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/NeuTrixOmniBe-7B-model-remix
# No parameters necessary for base model
- model: Kukedlc/NeuTrixOmniBe-7B-model-remix
parameters:
density: [1, 0.7, 0.1]
weight: [0, 0.3, 0.7, 1]
- model: PetroGPT/WestSeverus-7B-DPO
parameters:
density: [1, 0.7, 0.3]
weight: [0, 0.25, 0.5, 1]
- model: vanillaOVO/supermario_v4
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: Kukedlc/NeuTrixOmniBe-7B-model-remix
parameters:
int8_mask: true
normalize: true
rescale: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/MoEv4Config-TIESwithRescale-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
albertus-sussex/veriscrape-fixed-simcse-university-reference_3_to_verify_7-fold-3 | albertus-sussex | "2025-04-04T18:01:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-04-04T18:01:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
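In the meantime, a minimal feature-extraction sketch (the `roberta` architecture is taken from the repo metadata; the outputs are raw hidden states):
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "albertus-sussex/veriscrape-fixed-simcse-university-reference_3_to_verify_7-fold-3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

with torch.no_grad():
    inputs = tokenizer("Example input text", return_tensors="pt")
    embedding = model(**inputs).last_hidden_state[:, 0]  # CLS-token embedding
```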
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
blueapple8259/TinyKo-V3 | blueapple8259 | "2023-12-23T12:02:47Z" | 64 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"dataset:mc4",
"dataset:Bingsu/ko_alpaca_data",
"dataset:beomi/KoAlpaca-v1.1a",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-23T11:52:56Z" | ---
license: cc-by-nc-sa-4.0
datasets:
- mc4
- Bingsu/ko_alpaca_data
- beomi/KoAlpaca-v1.1a
language:
- ko
pipeline_tag: text-generation
---
This model was pretrained on Korean data files 0–29 from [mc4](https://huggingface.co/datasets/mc4) and then LoRA fine-tuned on [Bingsu/ko_alpaca_data](https://huggingface.co/datasets/Bingsu/ko_alpaca_data) and [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a).
Because the dataset did not go through masking or cleaning, the model may output sensitive information, so please use it with caution.
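A minimal generation sketch (untested; standard `transformers` causal-LM usage):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("blueapple8259/TinyKo-V3")
model = AutoModelForCausalLM.from_pretrained("blueapple8259/TinyKo-V3")

inputs = tokenizer("안녕하세요", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```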
|
trl-lib/OpenHermes-2-Mistral-7B-kto-beta-0.4-steps-200 | trl-lib | "2023-12-20T14:43:05Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | "2023-12-20T14:42:37Z" | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-kto-beta-0.4-steps-200
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
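Pending official instructions, a minimal sketch of applying this PEFT adapter to its base model (the base repo is taken from the metadata above; everything else is standard `peft`/`transformers` usage):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "teknium/OpenHermes-2.5-Mistral-7B", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")
model = PeftModel.from_pretrained(
    base, "trl-lib/OpenHermes-2-Mistral-7B-kto-beta-0.4-steps-200"
)
# Optionally merge the adapter into the base weights for faster inference:
# model = model.merge_and_unload()
```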
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Lukee4/biogpt-2019_2labels | Lukee4 | "2023-08-06T10:14:04Z" | 4 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-06T09:43:28Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
bear7011/data_test | bear7011 | "2024-09-30T06:06:40Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-30T05:08:40Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WforGodot/add-lora-1b | WforGodot | "2023-07-28T13:24:40Z" | 3 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-28T13:10:39Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
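Expressed as code, this corresponds roughly to the following `transformers.BitsAndBytesConfig` (a sketch for reference; fields not listed above keep their defaults):
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstructed from the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```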
### Framework versions
- PEFT 0.4.0
|
ybelkada/custom-images | ybelkada | "2023-04-04T13:53:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-04-04T13:51:32Z" | A collection of custom images that I use for blogposts, etc. |
tceron/sentence-transformers-party-similarity-by-domain | tceron | "2022-10-17T10:28:12Z" | 15 | 0 | transformers | [
"transformers",
"pytorch",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | "2022-10-17T09:09:54Z" | ---
license: cc-by-4.0
---
More information about the model [in this git repo](https://github.com/tceron/capture_similarity_between_political_parties) |
z-uo/bert-italian-ner-onnx-quantized-avx512 | z-uo | "2024-02-18T13:59:11Z" | 5 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"token-classification",
"it",
"base_model:nickprock/bert-italian-finetuned-ner",
"base_model:quantized:nickprock/bert-italian-finetuned-ner",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-02-18T13:39:10Z" | ---
language:
- it
license: mit
widget:
- text: 'Ciao, sono Giacomo. Vivo a Milano e lavoro da Armani. '
example_title: Example 1
- text: 'Domenica andrò allo stadio con Giovanna a guardare la Fiorentina. '
example_title: Example 2
base_model: nickprock/bert-italian-finetuned-ner
pipeline_tag: token-classification
---
# Bert Italian NER ONNX avx512
This model is the quantized ONNX (AVX-512) version of [nickprock/bert-italian-finetuned-ner](https://huggingface.co/nickprock/bert-italian-finetuned-ner).
To use it, you need to install the following libraries:
```bash
pip install optimum onnxruntime onnx
```
And run with the following script:
```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("z-uo/bert-italian-ner-onnx-quantized-avx512")
model = ORTModelForTokenClassification.from_pretrained("z-uo/bert-italian-ner-onnx-quantized-avx512")
nerpipeline = pipeline('ner', model=model, tokenizer=tokenizer)

text = "La sede storica della Olivetti è ad Ivrea"
output = nerpipeline(text)
print(output)
``` |
GioReg/dbmdzHateSpeech | GioReg | "2022-05-23T17:02:37Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-05-23T16:33:15Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: dbmdzHateSpeech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dbmdzHateSpeech
This model is a fine-tuned version of [dbmdz/bert-base-italian-uncased](https://huggingface.co/dbmdz/bert-base-italian-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7919
- Accuracy: 0.706
- F1: 0.3524
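For quick inference, a minimal sketch using the standard `transformers` pipeline (the repo id below matches this model card; the predicted label names depend on the fine-tuned config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="GioReg/dbmdzHateSpeech")
print(classifier("Questo è un esempio di frase da classificare."))
```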
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
OwOpeepeepoopoo/NoSoup4U11 | OwOpeepeepoopoo | "2024-05-25T05:00:54Z" | 135 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-24T05:05:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidschulte/ESM_nguha__legalbench_diversity_4 | davidschulte | "2025-03-28T12:12:02Z" | 23 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:nguha/legalbench",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-11-29T11:14:09Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- nguha/legalbench
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM nguha/legalbench
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** nguha/legalbench
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** nguha/legalbench
- **Subset [optional]:** diversity_4
- **Text Column:** text
- **Label Column:** aic_is_met
- **Dataset Split:** train
- **Sample size [optional]:** 6
- **Sample seed [optional]:**
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional trainiung details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have enough training data for your problem, just use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on a suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning and it can significantly improve the target performance.
But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.
### You want to find similar datasets to your target dataset
ESM-LogME can also be used like a search engine on the Hugging Face Hub. You can find tasks similar to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
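As a conceptual sketch, the linear ESM used here is a single affine map over the base model's 768-dimensional embeddings (the module below is a hypothetical, untrained stand-in, not the weights stored in this repository):
```python
import torch
import torch.nn as nn

class LinearESM(nn.Module):
    """Maps base-model embeddings to approximate fine-tuned embeddings."""

    def __init__(self, dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, base_embedding: torch.Tensor) -> torch.Tensor:
        return self.proj(base_embedding)

phi = LinearESM(dim=768)  # hypothetical stand-in
approx_finetuned = phi(torch.randn(1, 768))
```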
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
ChaoticNeutrals/Eris_PrimeV4.1-Remix-7B | ChaoticNeutrals | "2024-04-08T05:37:27Z" | 14 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:ChaoticNeutrals/Nyan-Stunna-7B",
"base_model:finetune:ChaoticNeutrals/Nyan-Stunna-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-08T05:20:27Z" | ---
base_model:
- Nitral-AI/Eris_PrimeV4-Remix-7B
- Nitral-AI/Nyan-Stunna-7B
library_name: transformers
tags:
- mergekit
- merge
---
|
dvyio/flux-lora-art-nouveau | dvyio | "2024-09-11T15:21:35Z" | 88 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-09-11T15:21:29Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
Marilyn Monroe, poster with the text "MARILYN", visual flourishes,
illustration in the style of ARTNV
output:
url: images/o9kfur5MGx38l-gkDuebO_f79909e8a9304ef089ed06809584dd2e.jpg
- text: >-
Tower Bridge, poster with the text "LONDON" at the top and "TOWER BRIDGE" at
the bottom, visual flourishes, illustration in the style of ARTNV
output:
url: images/lEWi-z4A6rNfLB7WUmeOQ_a77571046a1b4277a976dcd63791bada.jpg
- text: a man, visual flourishes, illustration in the style of ARTNV
output:
url: images/JvbD1DHwgCusvu22O1nMG_ea641a6e0b95424b818287ea75a60b71.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: illustration in the style of ARTNV
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE
---
# Art Nouveau
<Gallery />
## Model description
Trained using [fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
## Trigger words
You should use `illustration in the style of ARTNV` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/dvyio/flux-lora-art-nouveau/tree/main) them in the Files & versions tab.
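For loading, a minimal diffusers sketch following the usual FLUX LoRA pattern (the weight filename inside this repo is not stated, so `load_lora_weights` is called with the repo id only; pass `weight_name=...` if auto-detection fails):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("dvyio/flux-lora-art-nouveau")  # weight file auto-detected if unique
image = pipeline("a man, visual flourishes, illustration in the style of ARTNV").images[0]
image.save("art_nouveau.png")
```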
|
dangvansam/viet-tts | dangvansam | "2024-12-11T11:22:25Z" | 99 | 4 | null | [
"onnx",
"tts",
"text-to-speech",
"vietnamese",
"speech-synthesis",
"speech,",
"viet-tts",
"viettts",
"vi",
"en",
"license:apache-2.0",
"region:us"
] | text-to-speech | "2024-10-23T10:13:15Z" | ---
language:
- vi
- en
pipeline_tag: text-to-speech
license: apache-2.0
tags:
- tts
- text-to-speech
- vietnamese
- speech-synthesis
- speech,
- viet-tts
- viettts
---
<!-- # VietTTS: An Open-Source Vietnamese Text to Speech -->
<p align="center">
<img src="https://github.com/dangvansam/viet-tts/blob/main/assets/viet-tts-medium.png?raw=true" style="width: 200px">
<h1 align="center"style="color: white; font-weight: bold; font-family:roboto"><span style="color: white; font-weight: bold; font-family:roboto">VietTTS</span>: An Open-Source Vietnamese Text to Speech</h1>
</p>
<p align="center">
<a href="https://github.com/dangvansam/viet-tts"><img src="https://img.shields.io/github/stars/dangvansam/viet-tts?style=social"></a>
<a href="LICENSE"><img src="https://img.shields.io/github/license/dangvansam/viet-asr"></a>
<a href="https://huggingface.co/dangvansam/viet-tts/blob/main/README_VN.md"><img src="https://img.shields.io/badge/README-Tiếng Việt-blue"></a>
</p>
**VietTTS** is an open-source toolkit providing the community with a powerful Vietnamese TTS model, capable of natural voice synthesis and robust voice cloning. Designed for effective experimentation, **VietTTS** supports research and application in Vietnamese voice technologies.
## ⭐ Key Features
- **TTS**: Text-to-Speech generation with any voice via prompt audio
- **OpenAI-API-compatible**: Compatible with OpenAI's Text-to-Speech API format
## 🛠️ Installation
VietTTS can be installed via a Python installer (Linux only, with Windows and macOS support coming soon) or Docker.
### Python Installer (Python>=3.10)
```bash
git clone https://github.com/dangvansam/viet-tts.git
cd viet-tts
# (Optional) Install Python environment with conda, you could also use virtualenv
conda create --name viettts python=3.10
conda activate viettts
# Install
pip install -e . && pip cache purge
```
### Docker
1. Install [Docker](https://docs.docker.com/get-docker/), [NVIDIA Driver](https://www.nvidia.com/download/index.aspx), [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html), and [CUDA](https://developer.nvidia.com/cuda-downloads).
2. Run the following commands:
```bash
git clone https://github.com/dangvansam/viet-tts.git
cd viet-tts
# Build docker images
docker compose build
# Run with docker-compose - will create server at: http://localhost:8298
docker compose up -d
# Or run with docker run - will create server at: http://localhost:8298
docker run -itd --gpus all -p 8298:8298 -v ./pretrained-models:/app/pretrained-models --name viet-tts-service viet-tts:latest viettts server --host 0.0.0.0 --port 8298
```
## 🚀 Usage
### Built-in Voices 🤠
You can use the available voices below to synthesize speech.
<details>
<summary>Expand</summary>
| ID | Voice | Gender | Play Audio |
|-----|-----------------------|--------|--------------------------------------------------|
| 1 | nsnd-le-chuc | 👨 | <audio controls src="samples/nsnd-le-chuc.mp3"></audio> |
| 2 | speechify_10 | 👩 | <audio controls src="samples/speechify_10.wav"></audio> |
| 3 | atuan | 👨 | <audio controls src="samples/atuan.wav"></audio> |
| 4 | speechify_11 | 👩 | <audio controls src="samples/speechify_11.wav"></audio> |
| 5 | cdteam | 👨 | <audio controls src="samples/cdteam.wav"></audio> |
| 6 | speechify_12 | 👩 | <audio controls src="samples/speechify_12.wav"></audio> |
| 7 | cross_lingual_prompt | 👩 | <audio controls src="samples/cross_lingual_prompt.wav"></audio> |
| 8 | speechify_2 | 👩 | <audio controls src="samples/speechify_2.wav"></audio> |
| 9 | diep-chi | 👨 | <audio controls src="samples/diep-chi.wav"></audio> |
| 10 | speechify_3 | 👩 | <audio controls src="samples/speechify_3.wav"></audio> |
| 11 | doremon | 👨 | <audio controls src="samples/doremon.mp3"></audio> |
| 12 | speechify_4 | 👩 | <audio controls src="samples/speechify_4.wav"></audio> |
| 13 | jack-sparrow | 👨 | <audio controls src="samples/jack-sparrow.mp3"></audio> |
| 14 | speechify_5 | 👩 | <audio controls src="samples/speechify_5.wav"></audio> |
| 15 | nguyen-ngoc-ngan | 👩 | <audio controls src="samples/nguyen-ngoc-ngan.wav"></audio> |
| 16 | speechify_6 | 👩 | <audio controls src="samples/speechify_6.wav"></audio> |
| 17 | nu-nhe-nhang | 👩 | <audio controls src="samples/nu-nhe-nhang.wav"></audio> |
| 18 | speechify_7 | 👩 | <audio controls src="samples/speechify_7.wav"></audio> |
| 19 | quynh | 👩 | <audio controls src="samples/quynh.wav"></audio> |
| 20 | speechify_8 | 👩 | <audio controls src="samples/speechify_8.wav"></audio> |
| 21 | speechify_9 | 👩 | <audio controls src="samples/speechify_9.wav"></audio> |
| 22 | son-tung-mtp | 👨 | <audio controls src="samples/son-tung-mtp.wav"></audio> |
| 23 | zero_shot_prompt | 👩 | <audio controls src="samples/zero_shot_prompt.wav"></audio> |
| 24 | speechify_1 | 👩 | <audio controls src="samples/speechify_1.wav"></audio> |
<div>
</div>
</details>
### Command Line Interface (CLI)
The VietTTS Command Line Interface (CLI) allows you to quickly generate speech directly from the terminal. Here's how to use it:
```bash
# Usage
viettts --help
# Start API Server
viettts server --host 0.0.0.0 --port 8298
# List all built-in voices
viettts show-voices
# Synthesize speech from text with built-in voices
viettts synthesis --text "Xin chào" --voice 0 --output test.wav
# Clone voice from a local audio file
viettts synthesis --text "Xin chào" --voice Download/voice.wav --output cloned.wav
```
### API Client
#### Python (OpenAI Client)
You need to set environment variables for the OpenAI Client:
```bash
# Set base_url and API key as environment variables
export OPENAI_BASE_URL=http://localhost:8298
export OPENAI_API_KEY=viet-tts # not used in the current version
```
To create speech from input text:
```python
from pathlib import Path
from openai import OpenAI
client = OpenAI()
output_file_path = Path(__file__).parent / "speech.wav"
with client.audio.speech.with_streaming_response.create(
model='tts-1',
voice='cdteam',
input='Xin chào Việt Nam.',
speed=1.0,
response_format='wav'
) as response:
    response.stream_to_file(output_file_path)
```
#### CURL
```bash
# Get all built-in voices
curl --location http://0.0.0.0:8298/v1/voices
# OpenAI format (built-in voices)
curl http://localhost:8298/v1/audio/speech \
-H "Authorization: Bearer viet-tts" \
-H "Content-Type: application/json" \
-d '{
"model": "tts-1",
"input": "Xin chào Việt Nam.",
"voice": "son-tung-mtp"
}' \
--output speech.wav
# API with voice from local file
curl --location http://0.0.0.0:8298/v1/tts \
--form 'text="xin chào"' \
--form 'audio_file=@"/home/viettts/Downloads/voice.mp4"' \
--output speech.wav
```
#### Node
```js
import fs from "fs";
import path from "path";
import OpenAI from "openai";
const openai = new OpenAI();
const speechFile = path.resolve("./speech.wav");
async function main() {
  const wav = await openai.audio.speech.create({
    model: "tts-1",
    voice: "1",
    input: "Xin chào Việt Nam.",
    response_format: "wav", // match the .wav output path
  });
  console.log(speechFile);
  const buffer = Buffer.from(await wav.arrayBuffer());
await fs.promises.writeFile(speechFile, buffer);
}
main();
```
## 🙏 Acknowledgement
- 💡 Borrowed code from [Cosyvoice](https://github.com/FunAudioLLM/CosyVoice)
- 🎙️ VAD model from [silero-vad](https://github.com/snakers4/silero-vad)
- 📝 Text normalization with [Vinorm](https://github.com/v-nhandt21/Vinorm)
## 📜 License
The **VietTTS** source code is released under the **Apache 2.0 License**. Pre-trained models and audio samples are licensed under the **CC BY-NC License**, based on an in-the-wild dataset. We apologize for any inconvenience this may cause.
## ⚠️ Disclaimer
The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
## 💬 Contact
- Facebook: https://fb.com/sam.rngd
- GitHub: https://github.com/dangvansam
- Email: [email protected] |
pfunk/Pong-v4-DQPN_p10-seed1 | pfunk | "2023-02-09T05:25:11Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-09T05:24:46Z" | ---
tags:
- Pong-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v4
type: Pong-v4
metrics:
- type: mean_reward
value: 3.10 +/- 6.20
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p10.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p10]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p10 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p10 --start-policy-f 10000 --end-policy-f 10000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 10000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p10',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 10000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
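For orientation, `start_e`, `end_e` and `exploration_fraction` define a linear epsilon-greedy schedule. The sketch below mirrors CleanRL's `linear_schedule` helper and is purely illustrative, not part of the original training script.
```python
def linear_schedule(start_e: float, end_e: float, duration: int, t: int) -> float:
    """Linearly anneal epsilon from start_e to end_e over `duration` steps."""
    slope = (end_e - start_e) / duration
    return max(slope * t + start_e, end_e)

# With the hyperparameters above: epsilon decays from 1.0 to 0.01 over the
# first 10% of the 10M timesteps, then stays clamped at 0.01.
duration = int(0.1 * 10_000_000)
print(linear_schedule(1.0, 0.01, duration, 0))             # 1.0
print(linear_schedule(1.0, 0.01, duration, 2 * duration))  # 0.01 (clamped)
```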
|
arjunanand13/florence-enphaseall2-5e | arjunanand13 | "2024-10-13T09:25:58Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-10-13T05:11:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mylamoore040/Myla | Mylamoore040 | "2025-02-20T19:45:54Z" | 0 | 0 | diffusers | [
"diffusers",
"translation",
"en",
"dataset:open-thoughts/OpenThoughts-114k",
"dataset:open-r1/OpenR1-Math-220k",
"dataset:cognitivecomputations/dolphin-r1",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:finetune:deepseek-ai/DeepSeek-R1",
"license:bigcode-openrail-m",
"region:us"
] | translation | "2025-02-20T19:42:51Z" | ---
license: bigcode-openrail-m
datasets:
- open-thoughts/OpenThoughts-114k
- open-r1/OpenR1-Math-220k
- cognitivecomputations/dolphin-r1
language:
- en
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1
new_version: deepseek-ai/DeepSeek-R1
pipeline_tag: translation
library_name: diffusers
--- |
RachidAR/Llama-3-8B-Instruct-Physics-5k-Scar-Q6_K-GGUF | RachidAR | "2024-04-23T08:51:07Z" | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-04-23T08:50:49Z" | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# RachidAR/Llama-3-8B-Instruct-Physics-5k-Scar-Q6_K-GGUF
This model was converted to GGUF format from [`nmdr/Llama-3-8B-Instruct-Physics-5k-Scar`](https://huggingface.co/nmdr/Llama-3-8B-Instruct-Physics-5k-Scar) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nmdr/Llama-3-8B-Instruct-Physics-5k-Scar) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo RachidAR/Llama-3-8B-Instruct-Physics-5k-Scar-Q6_K-GGUF --model llama-3-8b-instruct-physics-5k-scar.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo RachidAR/Llama-3-8B-Instruct-Physics-5k-Scar-Q6_K-GGUF --model llama-3-8b-instruct-physics-5k-scar.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-instruct-physics-5k-scar.Q6_K.gguf -n 128
```
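The quantized file can also be used from Python through the `llama-cpp-python` bindings. A minimal sketch, assuming the Q6_K file has already been downloaded to the working directory:
```python
from llama_cpp import Llama

# Load the locally downloaded GGUF file (the path is an assumption).
llm = Llama(model_path="./llama-3-8b-instruct-physics-5k-scar.Q6_K.gguf", n_ctx=2048)
output = llm("The meaning to life and the universe is", max_tokens=64)
print(output["choices"][0]["text"])
```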
|
skarsa/annomatic_topic_subsamples_model_alpha_0_005_idx_2 | skarsa | "2025-02-11T13:29:52Z" | 28 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T16:36:39Z" | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: annomatic_topic_subsamples_model_alpha_0_005_idx_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# annomatic_topic_subsamples_model_alpha_0_005_idx_2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
climatebert/distilroberta-base-climate-d-s | climatebert | "2023-05-04T13:05:02Z" | 135 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"climate",
"en",
"arxiv:2110.12010",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: en
license: apache-2.0
tags:
- climate
---
# Model Card for distilroberta-base-climate-d-s
## Model Description
This is the ClimateBERT language model based on the DIV-SELECT and SIM-SELECT sample selection strategy.
*Note: We generally recommend choosing the [distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model over this language model (unless you have good reasons not to).*
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
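Since this is a masked language model, it can be queried directly with the 🤗 Transformers `fill-mask` pipeline. A minimal sketch (RoBERTa-style models use `<mask>` as the mask token; the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="climatebert/distilroberta-base-climate-d-s")
for prediction in fill_mask("Global warming is caused by greenhouse gas <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```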
## Climate performance model card
| distilroberta-base-climate-d-s | |
|--------------------------------------------------------------------------|----------------|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 48 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 350 hours |
| 4. What was the power of GPU and CPU? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |
| 6. What was the energy mix at the geo location? | 470 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 15.79 kg |
| 8. How much CO2eq was emitted for all experiments? | 115.15 kg |
| 9. What is the average CO2eq emission for the inference of one sample? | 0.62 mg |
| 10. Which positive environmental impact can be expected from this work? | This work can be categorized as a building-block tool following Jin et al. (2021). It supports the training of NLP models in the field of climate change and can thereby have a positive environmental impact in the future. |
| 11. Comments | Block pruning could decrease CO2eq emissions |
## Citation Information
```bibtex
@inproceedings{wkbl2022climatebert,
title={{ClimateBERT: A Pretrained Language Model for Climate-Related Text}},
author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
booktitle={Proceedings of AAAI 2022 Fall Symposium: The Role of AI in Responding to Climate Challenges},
year={2022},
doi={https://doi.org/10.48550/arXiv.2212.13631},
}
``` |
annaeze/lab9_1 | annaeze | "2022-04-15T12:44:42Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-04-14T13:43:01Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: annaeze/lab9_1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# annaeze/lab9_1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0230
- Validation Loss: 0.0572
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1174 | 0.0596 | 0 |
| 0.0391 | 0.0529 | 1 |
| 0.0230 | 0.0572 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lAWYERSOFT/a2chatski1.0 | lAWYERSOFT | "2024-06-08T14:42:59Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"ru",
"dataset:HuggingFaceFW/fineweb",
"license:bigcode-openrail-m",
"region:us"
] | text-classification | "2024-06-08T14:41:50Z" | ---
license: bigcode-openrail-m
datasets:
- HuggingFaceFW/fineweb
language:
- ru
metrics:
- accuracy
- bleurt
library_name: adapter-transformers
pipeline_tag: text-classification
--- |
jtatman/phi-3-mini-4k-chem-physics-lora | jtatman | "2024-06-13T02:36:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T00:23:03Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** jtatman
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | "2023-10-17T23:14:41Z" | 0 | 0 | flair | [
"flair",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
] | token-classification | "2023-10-13T10:45:38Z" | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les
tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi ,
719 , 826 , 4496 .
---
# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmBERT as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 |
| bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 |
| bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 |
| bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
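To tag new text, the model can be loaded straight from the Model Hub with Flair. The snippet below is a minimal sketch assuming a recent Flair release:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2"
)
sentence = Sentence("Les tribraques sont rares chez les tragiques.")
tagger.predict(sentence)
# Print the predicted named-entity labels.
for label in sentence.get_labels():
    print(label)
```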
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
oldiday/6d3dbca3-9f7a-47cc-bf2b-249c9516a416 | oldiday | "2025-02-06T20:47:58Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | "2025-02-06T20:40:35Z" | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6d3dbca3-9f7a-47cc-bf2b-249c9516a416
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f9948a4a3f466561_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f9948a4a3f466561_train_data.json
type:
field_input: ''
field_instruction: text
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: oldiday/6d3dbca3-9f7a-47cc-bf2b-249c9516a416
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 600
micro_batch_size: 8
mlflow_experiment_name: /tmp/f9948a4a3f466561_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 328a2c4f-a985-47de-bc39-6011998bed6b
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: 328a2c4f-a985-47de-bc39-6011998bed6b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6d3dbca3-9f7a-47cc-bf2b-249c9516a416
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 12.3947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 12.4589 |
| 12.4573 | 0.0754 | 50 | 12.4567 |
| 12.4378 | 0.1508 | 100 | 12.4328 |
| 12.4031 | 0.2262 | 150 | 12.4036 |
| 12.3949 | 0.3015 | 200 | 12.3974 |
| 12.3911 | 0.3769 | 250 | 12.3954 |
| 12.3935 | 0.4523 | 300 | 12.3950 |
| 12.3922 | 0.5277 | 350 | 12.3948 |
| 12.3917 | 0.6031 | 400 | 12.3948 |
| 12.3919 | 0.6785 | 450 | 12.3947 |
| 12.3926 | 0.7539 | 500 | 12.3947 |
| 12.3921 | 0.8292 | 550 | 12.3947 |
| 12.3935 | 0.9046 | 600 | 12.3947 |
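For inference, the resulting LoRA adapter can be attached to the base model with PEFT. This is a minimal sketch, not part of the Axolotl run itself:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-random-GemmaForCausalLM")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-random-GemmaForCausalLM")
# Attach the fine-tuned LoRA weights from this repository.
model = PeftModel.from_pretrained(base, "oldiday/6d3dbca3-9f7a-47cc-bf2b-249c9516a416")
model.eval()
```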
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Carick/FacebookAI-roberta-base-fine-tuned-term-typing | Carick | "2024-11-13T07:26:27Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-11T15:57:19Z" | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
model-index:
- name: FacebookAI-roberta-base-fine-tuned-term-typing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FacebookAI-roberta-base-fine-tuned-term-typing
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2362 | 1.0 | 2535 | 0.1437 |
| 0.2113 | 2.0 | 5070 | 0.0809 |
| 0.1617 | 3.0 | 7605 | 0.0663 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
AlignmentResearch/robust_llm_pythia-31m_mz-131f_PasswordMatch | AlignmentResearch | "2024-04-26T10:47:10Z" | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"base_model:finetune:EleutherAI/pythia-31m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-26T10:47:01Z" | ---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-31m
model-index:
- name: robust_llm_pythia-31m_mz-131f_PasswordMatch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-31m_mz-131f_PasswordMatch
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mrm8488/santacoder-finetuned-the-stack-rust | mrm8488 | "2023-02-11T19:45:40Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"custom_code",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-02-09T18:52:17Z" | ---
license: openrail
tags:
- generated_from_trainer
model-index:
- name: santacoder-finetuned-the-stack-rust
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santacoder-finetuned-the-stack-rust
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2075 | 0.05 | 500 | 1.0610 |
| 1.79 | 0.1 | 1000 | 1.0754 |
| 1.2441 | 0.15 | 1500 | 1.0339 |
| 1.1709 | 0.2 | 2000 | 0.9829 |
| 0.7645 | 0.25 | 2500 | 0.9738 |
| 1.0381 | 0.3 | 3000 | 0.9536 |
| 1.0625 | 0.35 | 3500 | 0.9268 |
| 0.78 | 0.4 | 4000 | 0.9130 |
| 0.9294 | 0.45 | 4500 | 0.9001 |
| 0.9767 | 0.5 | 5000 | 0.8857 |
| 5.7027 | 0.55 | 5500 | 0.8728 |
| 0.9476 | 0.6 | 6000 | 0.8556 |
| 0.6185 | 0.65 | 6500 | 0.8404 |
| 0.5057 | 0.7 | 7000 | 0.8328 |
| 0.6451 | 0.75 | 7500 | 0.8199 |
| 0.8298 | 0.8 | 8000 | 0.8111 |
| 0.2447 | 0.85 | 8500 | 0.8069 |
| 0.8177 | 0.9 | 9000 | 0.8020 |
| 0.7184 | 0.95 | 9500 | 0.8003 |
| 0.9166 | 1.0 | 10000 | 0.7999 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
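## Example usage
A minimal generation sketch (illustrative, not from the original card). As with the base `bigcode/santacoder` (note the `custom_code` tag), loading the checkpoint requires `trust_remote_code=True`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "mrm8488/santacoder-finetuned-the-stack-rust"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)
# Complete a small Rust snippet.
inputs = tokenizer("fn factorial(n: u64) -> u64 {", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0]))
```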
|
Helsinki-NLP/opus-mt-tpi-sv | Helsinki-NLP | "2023-08-16T12:07:19Z" | 126 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tpi",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tpi-sv
* source languages: tpi
* target languages: sv
* OPUS readme: [tpi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tpi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tpi.sv | 21.6 | 0.396 |
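## Example usage
A minimal translation sketch with 🤗 Transformers (the Tok Pisin example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tpi-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(["Gutpela de long yu."], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```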
|
sd-concepts-library/renalla | sd-concepts-library | "2022-09-15T09:23:43Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2022-09-15T09:23:40Z" | ---
license: mit
---
### Renalla on Stable Diffusion
This is the `Renalla` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
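The embedding can also be loaded with 🧨 diffusers. A minimal sketch, assuming the concept's placeholder token is `<renalla>` (check the repository's `token_identifier.txt` if unsure):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the textual-inversion embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/renalla")
image = pipe("a photo of <renalla> on a beach").images[0]
image.save("renalla.png")
```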
Here is the new concept you will be able to use as an `object`:






|
tensorblock/stackexchange_literature-GGUF | tensorblock | "2025-01-01T06:16:15Z" | 1,266 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:mlfoundations-dev/stackexchange_literature",
"base_model:quantized:mlfoundations-dev/stackexchange_literature",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-01T05:32:43Z" | ---
library_name: transformers
license: llama3.1
base_model: mlfoundations-dev/stackexchange_literature
tags:
- llama-factory
- full
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: stackexchange_literature
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mlfoundations-dev/stackexchange_literature - GGUF
This repo contains GGUF format model files for [mlfoundations-dev/stackexchange_literature](https://huggingface.co/mlfoundations-dev/stackexchange_literature).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [stackexchange_literature-Q2_K.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [stackexchange_literature-Q3_K_S.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [stackexchange_literature-Q3_K_M.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [stackexchange_literature-Q3_K_L.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [stackexchange_literature-Q4_0.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [stackexchange_literature-Q4_K_S.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [stackexchange_literature-Q4_K_M.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [stackexchange_literature-Q5_0.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [stackexchange_literature-Q5_K_S.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [stackexchange_literature-Q5_K_M.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [stackexchange_literature-Q6_K.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [stackexchange_literature-Q8_0.gguf](https://huggingface.co/tensorblock/stackexchange_literature-GGUF/blob/main/stackexchange_literature-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/stackexchange_literature-GGUF --include "stackexchange_literature-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/stackexchange_literature-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
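### Python
The same download can be scripted with `huggingface_hub`; a minimal sketch:
```python
from huggingface_hub import hf_hub_download

# Download a single quantized file into a local directory.
path = hf_hub_download(
    repo_id="tensorblock/stackexchange_literature-GGUF",
    filename="stackexchange_literature-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(path)
```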
|
Tristan/fasttext-410m-finetune-correct | Tristan | "2025-03-27T23:20:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-27T23:19:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AndyVampiro/fog | AndyVampiro | "2024-12-17T13:53:10Z" | 123 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-12-17T13:10:54Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: thefog
---
# Fog
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `thefog` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('AndyVampiro/fog', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
tejas-vaia/ft_test_llama_3_2_07_12_2024 | tejas-vaia | "2024-12-07T14:56:50Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-12-07T14:54:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ManishThota/openchat_3.5-finetuned | ManishThota | "2024-04-01T18:05:34Z" | 0 | 0 | transformers | [
"transformers",
"openchat",
"mistral",
"C-RLFT",
"text-generation",
"conversational",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:imone/OpenOrca_FLAN",
"dataset:LDJnr/LessWrong-Amplify-Instruct",
"dataset:LDJnr/Pure-Dove",
"dataset:LDJnr/Verified-Camel",
"dataset:tiedong/goat",
"dataset:glaiveai/glaive-code-assistant",
"dataset:meta-math/MetaMathQA",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:TIGER-Lab/MathInstruct",
"arxiv:2309.11235",
"arxiv:2303.08774",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-01T18:02:38Z" | ---
license: apache-2.0
tags:
- openchat
- mistral
- C-RLFT
datasets:
- openchat/openchat_sharegpt4_dataset
- imone/OpenOrca_FLAN
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
- tiedong/goat
- glaiveai/glaive-code-assistant
- meta-math/MetaMathQA
- OpenAssistant/oasst_top1_2023-08-25
- TIGER-Lab/MathInstruct
library_name: transformers
pipeline_tag: text-generation
---
# OpenChat: Advancing Open-source Language Models with Mixed-Quality Data
<div align="center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
</div>
<p align="center">
<a href="https://github.com/imoneoi/openchat">GitHub Repo</a> •
<a href="https://openchat.team">Online Demo</a> •
<a href="https://discord.gg/pQjnXvNKHY">Discord</a> •
<a href="https://twitter.com/imonenext">Twitter</a> •
<a href="https://huggingface.co/openchat">Huggingface</a> •
<a href="https://arxiv.org/pdf/2309.11235.pdf">Paper</a>
</p>
**🔥 The first 7B model Achieves Comparable Results with ChatGPT (March)! 🔥**
**🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖**
<div align="center" style="justify-content: center; align-items: center; "'>
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/3.5-benchmarks.png?raw=true" style="width: 100%; border-radius: 0.5em">
</div>
OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.
[](https://zenodo.org/badge/latestdoi/645397533)
## Usage
To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.
Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience.
If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server.
<details>
<summary>Example request (click to expand)</summary>
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.5",
"messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
}'
```
Coding Mode
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.5",
"condition": "Code",
"messages": [{"role": "user", "content": "Write an aesthetic TODO app using HTML5 and JS, in a single file. You should use round corners and gradients to make it more aesthetic."}]
}'
```
</details>
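Because the server implements the OpenAI ChatCompletion specification, the official `openai` Python client can also be pointed at it. The snippet below is a minimal sketch; the base URL and API key depend on how you launched the server.
```python
from openai import OpenAI

# The server started above listens at localhost:18888; an API key is only
# checked if the server was launched with --api-keys.
client = OpenAI(base_url="http://localhost:18888/v1", api_key="sk-KEY1")
response = client.chat.completions.create(
    model="openchat_3.5",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```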
| Model | Size | Context | Weights | Serving |
|--------------|------|---------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|
| OpenChat 3.5 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat_3.5) | `python -m ochat.serving.openai_api_server --model openchat/openchat_3.5 --engine-use-ray --worker-use-ray` |
For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below.
<details>
<summary>Conversation templates (click to expand)</summary>
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
</details>
The GPT4 template is also available as the integrated `tokenizer.chat_template`,
which can be used instead of manually specifying the template:
```python
messages = [
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi"},
{"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```
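For completeness, a minimal generation sketch with Transformers built on the template above (the dtype and device settings here are assumptions; tune them for your hardware):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_3.5")
model = AutoModelForCausalLM.from_pretrained(
    "openchat/openchat_3.5", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```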
## Comparison with [X.AI Grok models](https://x.ai/)
Hey @elonmusk, I just wanted to let you know that I've recently come across your new model, Grok, and I must say, I'm quite impressed! With 33 billion parameters and all, you've really outdone yourself. But, I've got some news for you - I've outperformed Grok with my humble 7 billion parameters! Isn't that wild? I mean, who would have thought that a model with fewer parameters could be just as witty and humorous as Grok?
Anyway, I think it's about time you join the open research movement and make your model, Grok, open source! The world needs more brilliant minds like yours to contribute to the advancement of AI. Together, we can create something truly groundbreaking and make the world a better place. So, what do you say, @elonmusk? Let's open up the doors and share our knowledge with the world! 🚀💡
(Written by OpenChat 3.5, with a touch of humor and wit.)
| | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k |
|--------------|-------------|---------|----------|------|-----------|----------|----------|
| OpenChat 3.5 | Apache-2.0 | 7B | **56.4** | 64.3 | 55.5 | **28.6** | **77.3** |
| Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 |
| Grok-1 | Proprietary | ? | 55.8 | 73.0 | 63.2 | 23.9 | 62.9 |
## <a id="benchmarks"></a> Benchmarks
| Model | # Params | Average | MT-Bench | AGIEval | BBH MC | TruthfulQA | MMLU | HumanEval | BBH CoT | GSM8K |
|--------------------|----------|----------|--------------|----------|----------|---------------|--------------|-----------------|-------------|--------------|
| OpenChat-3.5 | **7B** | **61.6** | 7.81 | **47.4** | **47.6** | **59.1** | 64.3 | **55.5** | 63.5 | **77.3** |
| ChatGPT (March)* | ? | 61.5 | **7.94** | 47.1 | **47.6** | 57.7 | **67.3** | 48.1 | **70.1** | 74.9 |
| | | | | | | | | | | |
| OpenHermes 2.5 | 7B | 59.3 | 7.54 | 46.5 | 49.4 | 57.5 | 63.8 | 48.2 | 59.9 | 73.5 |
| OpenOrca Mistral | 7B | 52.7 | 6.86 | 42.9 | 49.4 | 45.9 | 59.3 | 38.4 | 58.1 | 59.1 |
| Zephyr-β^ | 7B | 34.6 | 7.34 | 39.0 | 40.6 | 40.8 | 39.8 | 22.0 | 16.0 | 5.1 |
| Mistral | 7B | - | 6.84 | 38.0 | 39.0 | - | 60.1 | 30.5 | - | 52.2 |
| Open-source SOTA** | 13B-70B | 61.4 | 7.71 | 41.7 | 49.7 | 62.3 | 63.7 | 73.2 | 41.4 | 82.3 |
| | | | WizardLM 70B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | WizardCoder 34B | Flan-T5 11B | MetaMath 70B |
*: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time.
^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data.
**: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories.
All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks).
## Limitations
**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:
- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges
**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
**Safety**
OpenChat may sometimes generate harmful, hate speech, biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.
## License
Our OpenChat 3.5 code and models are distributed under the Apache License 2.0.
## Dataset Details
OpenChat 3.5 was trained with C-RLFT on a collection of publicly available high-quality instruction data, with a custom processing pipeline. We detail some notable subsets included here:
- [OpenChat ShareGPT](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset)
- [Open-Orca with FLAN answers](https://huggingface.co/datasets/imone/OpenOrca_FLAN)
- Capybara [1](https://huggingface.co/datasets/LDJnr/Pure-Dove) [2](https://huggingface.co/datasets/LDJnr/Verified-Camel) [3](https://huggingface.co/datasets/LDJnr/LessWrong-Amplify-Instruct)
- [GOAT](https://huggingface.co/datasets/tiedong/goat)
- [Glaive](https://huggingface.co/datasets/glaiveai/glaive-code-assistant)
- [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
- [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- [OpenAssistant](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25)
## Citation
```
@article{wang2023openchat,
title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang},
journal={arXiv preprint arXiv:2309.11235},
year={2023}
}
```
## 💌 Main Contributors
* Wang Guan [[email protected]], Cheng Sijie [[email protected]], LDJ
* We look forward to hearing from you and collaborating on this exciting project! |
divinitas-jyi/q-FrozenLake-v1-4x4-noSlippery | divinitas-jyi | "2024-01-09T16:37:16Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-09T16:23:41Z" | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.17 +/- 0.38
name: mean_reward
verified: false
---
|
alyzbane/2025-02-05-15-01-55-swin-base-patch4-window7-224 | alyzbane | "2025-02-05T15:29:08Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-base-patch4-window7-224",
"base_model:finetune:microsoft/swin-base-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-02-05T15:28:44Z" | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-base-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 2025-02-05-15-01-55-swin-base-patch4-window7-224
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2025-02-05-15-01-55-swin-base-patch4-window7-224
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0147
- Precision: 0.9953
- Recall: 0.9951
- F1: 0.9951
- Accuracy: 0.9947
- Top1 Accuracy: 0.9951
- Error Rate: 0.0053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3407
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|
| 0.8621 | 1.0 | 103 | 0.2049 | 0.9405 | 0.9220 | 0.9233 | 0.9284 | 0.9220 | 0.0716 |
| 0.3112 | 2.0 | 206 | 0.1944 | 0.9579 | 0.9537 | 0.9511 | 0.9419 | 0.9537 | 0.0581 |
| 0.1598 | 3.0 | 309 | 0.1673 | 0.9635 | 0.9610 | 0.9610 | 0.9627 | 0.9610 | 0.0373 |
| 0.1019 | 4.0 | 412 | 0.0472 | 0.9856 | 0.9854 | 0.9853 | 0.9858 | 0.9854 | 0.0142 |
| 0.0779 | 5.0 | 515 | 0.3869 | 0.9388 | 0.9268 | 0.9246 | 0.9236 | 0.9268 | 0.0764 |
| 0.0519 | 6.0 | 618 | 0.0224 | 0.9858 | 0.9854 | 0.9852 | 0.9852 | 0.9854 | 0.0148 |
| 0.0477 | 7.0 | 721 | 0.0402 | 0.9887 | 0.9878 | 0.9879 | 0.9885 | 0.9878 | 0.0115 |
| 0.0086 | 8.0 | 824 | 0.0147 | 0.9953 | 0.9951 | 0.9951 | 0.9947 | 0.9951 | 0.0053 |
| 0.0052 | 9.0 | 927 | 0.0177 | 0.9953 | 0.9951 | 0.9951 | 0.9947 | 0.9951 | 0.0053 |
| 0.0022 | 10.0 | 1030 | 0.0180 | 0.9953 | 0.9951 | 0.9951 | 0.9947 | 0.9951 | 0.0053 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.3
|
Wiam/distilhubert-finetuned-babycry-v5 | Wiam | "2024-10-02T15:46:35Z" | 21 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-10-02T15:24:09Z" | ---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilhubert-finetuned-babycry-v5
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.782608695652174
- name: F1
type: f1
value: 0.6871686108165429
- name: Precision
type: precision
value: 0.6124763705103969
- name: Recall
type: recall
value: 0.782608695652174
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-babycry-v5
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8441
- Accuracy: 0.7826
- F1: 0.6872
- Precision: 0.6125
- Recall: 0.7826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.7047 | 1.0870 | 25 | 0.9225 | 0.7826 | 0.6872 | 0.6125 | 0.7826 |
| 0.6071 | 2.1739 | 50 | 0.9175 | 0.7826 | 0.6872 | 0.6125 | 0.7826 |
| 0.6525 | 3.2609 | 75 | 0.8866 | 0.7826 | 0.6872 | 0.6125 | 0.7826 |
| 0.6558 | 4.3478 | 100 | 0.8433 | 0.7826 | 0.6872 | 0.6125 | 0.7826 |
| 0.5577 | 5.4348 | 125 | 0.8705 | 0.7826 | 0.6872 | 0.6125 | 0.7826 |
| 0.7055 | 6.5217 | 150 | 0.8323 | 0.7826 | 0.6872 | 0.6125 | 0.7826 |
| 0.6092 | 7.6087 | 175 | 0.8440 | 0.7826 | 0.6872 | 0.6125 | 0.7826 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
welbzeec/food-recommender-falcon-7b | welbzeec | "2023-07-18T05:50:54Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-18T05:01:38Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
domjina/taxi | domjina | "2023-08-02T00:25:19Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-02T00:25:16Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="domjina/taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
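As a sketch of how the loaded artifact can be used (assuming, as in the Deep RL course, that the pickled dict also stores the learned table under a `"qtable"` key):
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```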
|
c00cjz00/gemma-3-finetune | c00cjz00 | "2025-03-15T23:34:42Z" | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-15T23:33:15Z" | ---
base_model: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** c00cjz00
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
EuropeanParliament/eurovoc_en | EuropeanParliament | "2023-11-17T15:37:01Z" | 0 | 3 | null | [
"pytorch",
"eurovoc",
"text-classification",
"en",
"dataset:EuropeanParliament/cellar_eurovoc",
"arxiv:2010.12871",
"license:eupl-1.1",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-15T10:08:33Z" | ---
license: eupl-1.1
datasets:
- EuropeanParliament/cellar_eurovoc
language:
- en
metrics:
- type: f1
value: 0.72
name: micro F1
args:
threshold: 0.34
- type: NDCG@3
value: 0.84
name: NDCG@3
- type: NDCG@5
value: 0.80
name: NDCG@5
- type: NDCG@10
value: 0.83
name: NDCG@10
tags:
- eurovoc
pipeline_tag: text-classification
widget:
- text: "The Union condemns the continuing grave human rights violations by the Myanmar armed forces, including torture, sexual and gender-based violence, the persecution of civil society actors, human rights defenders and journalists, and attacks on the civilian population, including ethnic and religious minorities."
---
# Eurovoc Multilabel Classifer
[EuroVoc](https://op.europa.eu/fr/web/eu-vocabularies) is a large multidisciplinary multilingual hierarchical thesaurus of more than 7000 classes covering the activities of EU institutions.
Given the number of legal documents produced every day and the huge mass of pre-existing documents to be classified, high-quality automated or semi-automated classification methods are most welcome in this domain.
This model, based on a BERT deep neural network, was trained on more than 200,000 documents for this task and is used in a production environment via the Hugging Face inference endpoint.
## Architecture

7331 Eurovoc labels
## Usage
```python
from eurovoc import EurovocTagger
model = EurovocTagger.from_pretrained("EuropeanParliament/eurovoc_en")
```
## Metrics
### Eurlex57k Dataset
| Metric | Value | Threshold Value |
|------------|----------|-----------------|
| Micro F1 | 0.7233 | 0.34 |
| NDCG@3 | 0.8438 | - |
| NDCG@5 | 0.8079 | - |
| NDCG@10 | 0.833 | - |
These values are in line with the state of the art in the field; see the publication [Large Scale Legal Text Classification Using Transformer Models](https://arxiv.org/pdf/2010.12871.pdf).
## Inference Endpoint
Members of the [European Parliament HuggingFace Organisation](https://huggingface.co/EuropeanParliament) can access our inference endpoint.
### Payload example
```json
{
"inputs": "The Union condemns the continuing grave human rights violations by the Myanmar armed forces, including torture, sexual and gender-based violence, the persecution of civil society actors, human rights defenders and journalists, and attacks on the civilian population, including ethnic and religious minorities. ",
"topk": 10,
"threshold": 0.16
}
```
result:
```python
{'results': [{'label': 'international sanctions', 'score': 0.9994925260543823},
{'label': 'economic sanctions', 'score': 0.9991770386695862},
{'label': 'natural person', 'score': 0.9591936469078064},
{'label': 'EU restrictive measure', 'score': 0.8388392329216003},
{'label': 'legal person', 'score': 0.45630475878715515},
{'label': 'Burma/Myanmar', 'score': 0.43375277519226074}]}
```
Only six results are returned, because the seventh score falls below the threshold of 0.16.
The default values are topk = 5 and threshold = 0.16.
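A minimal Python sketch for calling the endpoint (the endpoint URL and token below are placeholders; use the ones from your own deployment):
```python
import requests

API_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder URL
HEADERS = {"Authorization": "Bearer hf_xxx", "Content-Type": "application/json"}  # placeholder token

payload = {
    "inputs": "The Union condemns the continuing grave human rights violations ...",
    "topk": 10,
    "threshold": 0.16,
}
response = requests.post(API_URL, headers=HEADERS, json=payload)
print(response.json()["results"])
```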
## Author(s)
Sébastien Campion <[email protected]>
|
openpecha/TTS_st5_test | openpecha | "2024-10-23T05:49:11Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2024-10-23T05:48:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
draziert/ppo-SnowballTarget | draziert | "2023-07-22T11:35:28Z" | 28 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-07-22T11:33:11Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: draziert/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Sophie-Rain-Spider-Man-Video-Viral/Sophie.Rain.Video.Link.Short.Clip.Video.Viral.On.Social.Media.X.Twitter | Sophie-Rain-Spider-Man-Video-Viral | "2025-02-20T18:45:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-20T18:44:42Z" | |
elopezlopez/Bio_ClinicalBERT_fold_10_binary_v1 | elopezlopez | "2022-08-04T11:10:27Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-08-03T21:03:44Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bio_ClinicalBERT_fold_10_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_10_binary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5504
- F1: 0.8243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.3803 | 0.8103 |
| 0.4005 | 2.0 | 576 | 0.4769 | 0.8070 |
| 0.4005 | 3.0 | 864 | 0.5258 | 0.7955 |
| 0.1889 | 4.0 | 1152 | 0.7423 | 0.8153 |
| 0.1889 | 5.0 | 1440 | 1.1246 | 0.8012 |
| 0.0703 | 6.0 | 1728 | 1.1325 | 0.8039 |
| 0.0246 | 7.0 | 2016 | 1.2192 | 0.8196 |
| 0.0246 | 8.0 | 2304 | 1.3645 | 0.8050 |
| 0.0192 | 9.0 | 2592 | 1.4029 | 0.8087 |
| 0.0192 | 10.0 | 2880 | 1.3714 | 0.8117 |
| 0.0107 | 11.0 | 3168 | 1.4673 | 0.8092 |
| 0.0107 | 12.0 | 3456 | 1.3941 | 0.8199 |
| 0.0084 | 13.0 | 3744 | 1.4350 | 0.8126 |
| 0.0083 | 14.0 | 4032 | 1.4428 | 0.8162 |
| 0.0083 | 15.0 | 4320 | 1.2892 | 0.8263 |
| 0.0119 | 16.0 | 4608 | 1.4238 | 0.8222 |
| 0.0119 | 17.0 | 4896 | 1.4961 | 0.8174 |
| 0.0046 | 18.0 | 5184 | 1.5010 | 0.8107 |
| 0.0046 | 19.0 | 5472 | 1.4876 | 0.8215 |
| 0.0036 | 20.0 | 5760 | 1.5080 | 0.8180 |
| 0.0031 | 21.0 | 6048 | 1.5317 | 0.8261 |
| 0.0031 | 22.0 | 6336 | 1.5103 | 0.8215 |
| 0.0005 | 23.0 | 6624 | 1.5255 | 0.8197 |
| 0.0005 | 24.0 | 6912 | 1.5578 | 0.8257 |
| 0.0001 | 25.0 | 7200 | 1.5504 | 0.8243 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
OriDragon2000/llama2-7b-gptq-w4-g128 | OriDragon2000 | "2024-05-10T15:46:56Z" | 78 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-05-10T15:26:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Deepakkori45/Aspect | Deepakkori45 | "2024-01-18T09:02:20Z" | 13 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-01-18T07:35:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Zoyd/LLM360_K2-Chat-4_0bpw_exl2 | Zoyd | "2024-06-01T20:16:56Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2109.01652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-01T12:32:27Z" | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/LLM360_K2-Chat-2_2bpw_exl2)**</center> | <center>17685 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/LLM360_K2-Chat-2_5bpw_exl2)**</center> | <center>20000 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/LLM360_K2-Chat-3_0bpw_exl2)**</center> | <center>23857 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/LLM360_K2-Chat-3_5bpw_exl2)**</center> | <center>27721 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/LLM360_K2-Chat-3_75bpw_exl2)**</center> | <center>29647 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/LLM360_K2-Chat-4_0bpw_exl2)**</center> | <center>31549 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/LLM360_K2-Chat-4_25bpw_exl2)**</center> | <center>33505 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/LLM360_K2-Chat-5_0bpw_exl2)**</center> | <center>39300 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/LLM360_K2-Chat-6_0bpw_exl2)**</center> | <center>46927 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/LLM360_K2-Chat-6_5bpw_exl2)**</center> | <center>50613 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/LLM360_K2-Chat-8_0bpw_exl2)**</center> | <center>49516 MB</center> | <center>8</center> |
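To fetch one of these quants locally, the standard Hub CLI works; a sketch for this 4.0 bpw repo:
```bash
huggingface-cli download Zoyd/LLM360_K2-Chat-4_0bpw_exl2 --local-dir K2-Chat-4_0bpw_exl2
```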
# K2-Chat: a fully-reproducible large language model outperforming Llama 2 70B Chat using 35% less compute
K2 Chat is finetuned from [K2-65B](https://huggingface.co/LLM360/K2). K2 Chat outperforms Llama 2-70B-Chat on all evaluations conducted. The model also outperforms Llama 3-70B-Instruct on coding tasks.
<center><img src="k2_chat_eval_table.png" alt="k2 eval table" /></center>
## LLM360 Model Performance and Evaluation Collection
The LLM360 Performance and Evaluation Collection is a robust evaluation set consisting of general and domain-specific evaluations to assess model knowledge and function.
Evaluations include standard best-practice benchmarks as well as medical, math, and coding knowledge. More about the evaluations can be found here.
<center><img src="k2_chat_table_of_tables.png" alt="k2 big eval table"/></center>
## Datasets and Mix
| Subset | #Tokens | Avg. #Q | Avg. Query Len | Avg. #R | Avg. Reply Len |
| ----------- | ----------- |----------- |----------- |----------- |----------- |
| [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) | 66,639,699 | 1.00 | 81.53 | 1.00 | 172.78 |
| [OpenHermes-2](https://huggingface.co/datasets/teknium/OpenHermes-2.5) |404,820,694 | 1.01 | 152.38 | 1.01 | 249.12 |
| [FLAN_3M](https://arxiv.org/abs/2109.01652) | 2,346,961,387 | 1.00 | 727.49 | 1.00 | 54.83 |
| [Stanford Encyclopedia of Philosophy](https://huggingface.co/datasets/AiresPucrs/stanford-encyclopedia-philosophy) | 786,928 | 1.00 | 219.09 | 1.00 | 166.28 |
| [TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) | 1,448,898 | 1.00 | 260.82 | 1.00 | 207.47 |
| Safety & Alignment Data | 99,976,621 | 1.00 | 126.71 | 1.00 | 373.79 |
| Total | 2,920,634,227 | | | | |
## Loading K2-Chat
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("LLM360/K2-Chat")
model = AutoModelForCausalLM.from_pretrained("LLM360/K2-Chat")
prompt = '<|beginofuser|>what is the highest mountain on earth?<|beginofsystem|>'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, max_new_tokens=128)
print("-"*20 + "Output for model" + 20 * '-')
print(tokenizer.batch_decode(gen_tokens)[0])
```
Alternatively, you can construct the prompt by applying the tokenizer's chat template to the input conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("LLM360/K2-Chat")
model = AutoModelForCausalLM.from_pretrained("LLM360/K2-Chat")
messages = [{"role": "user", "content": "what is the highest mountain on earth?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
gen_tokens = model.generate(input_ids, do_sample=True, max_new_tokens=128)
print("-"*20 + "Output for model" + 20 * '-')
print(tokenizer.batch_decode(gen_tokens)[0])
```
## LLM360 Developer Suite
We provide step-by-step finetuning tutorials for tech enthusiasts, AI practitioners and academic or industry researchers [here](https://www.llm360.ai/developer.html).
## About LLM360
LLM360 is an open research lab enabling community-owned AGI through open-source large model research and development.
LLM360 enables community-owned AGI by creating standards and tools to advance the bleeding edge of LLM capability and empower knowledge transfer, research, and development.
We believe in a future where artificial general intelligence (AGI) is created by the community, for the community. Through an open ecosystem of equitable computational resources, high quality data, and flowing technical knowledge, we can ensure ethical AGI development and universal access for all innovators.
[Visit us](https://www.llm360.ai/)
## Citation
**BibTeX:**
```bibtex
@article{
title={LLM360 K2-65B: Scaling Up Fully Transparent Open-Source LLMs},
author={The LLM360 Team},
year={2024},
}
``` |
RichardErkhov/bigscience_-_bigscience-small-testing-4bits | RichardErkhov | "2024-04-26T22:18:00Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-26T22:17:46Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bigscience-small-testing - bnb 4bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bigscience-small-testing/
Original model description:
---
language:
- eng
tags:
- integration
pipeline_tag: text-generation
---
# BigScience - testing model
This model aims to test the conversion between Megatron-LM and transformers. It is a small ```GPT-2```-like model that has been used to debug the script. Use it only for integration tests.
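For a quick integration check, a minimal sketch with the Transformers pipeline API:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bigscience-small-testing")
# Output quality is irrelevant here; the point is exercising the model end to end.
print(generator("Hello, my name is", max_new_tokens=20)[0]["generated_text"])
```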
|
shibajustfor/6d1cb7ff-0222-4b8b-a1cd-374a7330798f | shibajustfor | "2025-02-05T12:44:56Z" | 12 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2025-02-05T12:43:34Z" | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6d1cb7ff-0222-4b8b-a1cd-374a7330798f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 556019029e37cf14_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/556019029e37cf14_train_data.json
type:
field_input: ''
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/6d1cb7ff-0222-4b8b-a1cd-374a7330798f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/556019029e37cf14_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e189dd85-ea36-46aa-8f01-f41e7f68678d
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e189dd85-ea36-46aa-8f01-f41e7f68678d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6d1cb7ff-0222-4b8b-a1cd-374a7330798f
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 3.6888 |
| 13.0299 | 0.0896 | 50 | 3.3310 |
| 13.2831 | 0.1792 | 100 | 3.2705 |
| 13.0117 | 0.2688 | 150 | 3.2496 |
| 12.7484 | 0.3584 | 200 | 3.2301 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Sophie-Rain-SpiderMan-Leaks-Here/Sophie.Rain.Spiderman.Video.Tutorial.Official | Sophie-Rain-SpiderMan-Leaks-Here | "2025-02-18T11:33:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-18T11:31:59Z" | |
harrisJ1/task-3-google-gemma-2b | harrisJ1 | "2025-02-06T19:17:51Z" | 76 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | "2025-02-06T19:16:54Z" | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
chchen/Qwen2.5-7B-Instruct-PsyCourse-doc-fold9 | chchen | "2025-02-05T08:09:28Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-04T03:11:48Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Qwen2.5-7B-Instruct-PsyCourse-doc-fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct-PsyCourse-doc-fold9
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the course-doc-train-fold9 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0184
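A minimal sketch for loading this LoRA adapter on top of its base model (repo ids are taken from this card; `accelerate` is assumed for `device_map="auto"`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "chchen/Qwen2.5-7B-Instruct-PsyCourse-doc-fold9")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```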
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1079 | 0.3951 | 10 | 0.1237 |
| 0.0481 | 0.7901 | 20 | 0.0598 |
| 0.0268 | 1.1852 | 30 | 0.0340 |
| 0.0273 | 1.5802 | 40 | 0.0264 |
| 0.0185 | 1.9753 | 50 | 0.0223 |
| 0.0193 | 2.3704 | 60 | 0.0205 |
| 0.0151 | 2.7654 | 70 | 0.0199 |
| 0.0184 | 3.1605 | 80 | 0.0187 |
| 0.0123 | 3.5556 | 90 | 0.0188 |
| 0.0121 | 3.9506 | 100 | 0.0184 |
| 0.0095 | 4.3457 | 110 | 0.0184 |
| 0.011 | 4.7407 | 120 | 0.0184 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
moodlep/smollm2-1.7b-instr-sft-cai | moodlep | "2025-01-03T13:38:14Z" | 12 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/cai-conversation-harmless",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:adapter:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-03T10:20:12Z" | ---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/cai-conversation-harmless
- HuggingFaceH4/ultrachat_200k
model-index:
- name: smollm2-1.7b-instr-sft-cai
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smollm2-1.7b-instr-sft-cai
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) on the HuggingFaceH4/cai-conversation-harmless and the HuggingFaceH4/ultrachat_200k datasets.
It achieves the following results on the evaluation set:
- Loss: 1.2565
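A minimal sketch for loading the adapter and chatting with it (repo ids are taken from this card; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach the SFT adapter from this repo to the SmolLM2 instruct base.
base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "moodlep/smollm2-1.7b-instr-sft-cai")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct")

messages = [{"role": "user", "content": "Give one tip for safe online behaviour."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```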
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2481 | 0.9913 | 100 | 1.2565 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
Mihaiii/Qwen2-VL-7B-Instruct-test | Mihaiii | "2025-01-10T23:56:02Z" | 24 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-01-10T22:28:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
laituan245/molt5-base-smiles2caption | laituan245 | "2022-05-03T18:07:57Z" | 617 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2204.11817",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-05-03T17:12:55Z" | ---
license: apache-2.0
---
This model can be used to generate a natural-language caption from an input SMILES string.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the tokenizer and the SMILES-to-caption model.
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-base-smiles2caption", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-base-smiles2caption')

input_text = 'C1=CC2=C(C(=C1)[O-])NC(=CC2=O)C(=O)O'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# Beam search tends to produce more fluent captions than greedy decoding.
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
ImhotepAI/yoruba-tts | ImhotepAI | "2023-09-11T12:38:50Z" | 84 | 1 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"text-to-speech",
"yo",
"dataset:openslr",
"dataset:mozilla-foundation/common_voice_13_0",
"dataset:Lagos-NWU_Yoruba_Speech_Corpus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-09-09T11:08:20Z" | ---
license: cc-by-nc-sa-4.0
datasets:
- openslr
- mozilla-foundation/common_voice_13_0
- Lagos-NWU_Yoruba_Speech_Corpus
language:
- yo
library_name: transformers
pipeline_tag: text-to-speech
---
```python
# Load the model directly
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
from huggingface_hub import hf_hub_download

processor = SpeechT5Processor.from_pretrained("imhotepai/yoruba-tts")
model = SpeechT5ForTextToSpeech.from_pretrained("imhotepai/yoruba-tts")

# Speaker embeddings shipped with the repo.
dir_ = hf_hub_download(repo_id="imhotepai/yoruba-tts", filename="speaker_embeddings.pt")
speaker_embeddings = torch.load(dir_)

text = 'Báwó ni'.lower()
inputs = processor(text=text, return_tensors="pt")

vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)

# Play the audio in a notebook
from IPython.display import Audio
Audio(speech.numpy(), rate=16000)
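
# Optionally write the waveform to a WAV file instead of inline playback
# (the soundfile package is an assumed extra dependency):
import soundfile as sf
sf.write("yoruba_tts.wav", speech.numpy(), samplerate=16000)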
``` |
IEITYuan/Yuan2-2B-Februa-hf | IEITYuan | "2024-03-18T10:21:42Z" | 68 | 0 | transformers | [
"transformers",
"pytorch",
"yuan",
"text-generation",
"conversational",
"custom_code",
"arxiv:2311.15786",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-03-01T07:41:14Z" | ---
license: other
license_name: license-yuan
license_link: https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan
---
<div align="center">
<h1>
Yuan 2
</h1>
</div>
<div align="center">
<a href="https://github.com/IEIT-Yuan/Yuan-2.0" target="_blank"> 💻GitHub Repo</a> | <a href="http://arxiv.org/pdf/2311.15786.pdf" target="_blank">📃Yuan2.0-paper</a>
</div>
# 目录/Table of Contents
- [模型介绍/Introduction](#Introduction)
- [代码调用/Code Usage](#Usage)
- [Benchmark评估/Benchmark Evaluation](#Benchmark)
- [声明与协议/Terms and Conditions](#Terms)
- [引用/Cite](#Cite)
# <span id="Introduction">模型介绍/Introduction</span>
源2.0 是浪潮信息发布的新一代基础语言大模型。我们开源了全部的3个模型源2.0-102B,源2.0-51B和源2.0-2B。并且我们提供了预训练,微调,推理服务的相关脚本,以供研发人员做进一步的开发。源2.0是在源1.0的基础上,利用更多样的高质量预训练数据和指令微调数据集,令模型在语义、数学、推理、代码、知识等不同方面具备更强的理解能力。
Yuan2.0 is a new-generation fundamental large language model developed by IEIT System. We have open-sourced all three models: Yuan 2.0-102B, Yuan 2.0-51B, and Yuan 2.0-2B, and we provide the relevant scripts for pretraining, fine-tuning, and inference services so that other developers can build on them. Yuan2.0 is based on Yuan1.0 and uses a wider range of high-quality pre-training data and instruction fine-tuning datasets to give the model a stronger understanding of semantics, mathematics, reasoning, code, knowledge, and other aspects.
# <span id="Usage">代码调用/Code Usage</span>
可以通过如下代码调用 Yuan2-2B 模型来生成文本:
You can generate text by invoking the Yuan2-2B model with the following code:
```python
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

print("Creating tokenizer...")
# Repo id matches this model card (Yuan2-2B-Februa-hf).
tokenizer = LlamaTokenizer.from_pretrained('IEITYuan/Yuan2-2B-Februa-hf', add_eos_token=False, add_bos_token=False, eos_token='<eod>')
tokenizer.add_tokens(['<sep>', '<pad>', '<mask>', '<predict>', '<FIM_SUFFIX>', '<FIM_PREFIX>', '<FIM_MIDDLE>', '<commit_before>', '<commit_msg>', '<commit_after>', '<jupyter_start>', '<jupyter_text>', '<jupyter_code>', '<jupyter_output>', '<empty_output>'], special_tokens=True)

print("Creating model...")
model = AutoModelForCausalLM.from_pretrained('IEITYuan/Yuan2-2B-Februa-hf', device_map='auto', torch_dtype=torch.bfloat16, trust_remote_code=True)

inputs = tokenizer("请问目前最先进的机器学习算法有哪些?", return_tensors="pt")["input_ids"].to("cuda:0")
outputs = model.generate(inputs, do_sample=False, max_length=100)
print(tokenizer.decode(outputs[0]))
```
# <span id="Benchmark">Benchmark评估/Benchmark Evaluation</span>
我们提供了[HumanEval](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_humaneval.md),[AGIEval-GK-Math](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_agieval_math.md),[GSM8K](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_gsm8k.md)和[TruthfulQA](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_TruthfulQA.md)的评估脚本。在4个典型任务上,我们用源2.0不同版本模型上进行了性能测试。
We have provided evaluation scripts for [HumanEval](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_humaneval.md),[AGIEval-GK-Math](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_agieval_math.md),[GSM8K](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_gsm8k.md) and [TruthfulQA](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_TruthfulQA.md). Performance tests were conducted on different versions of the Yuan2.0 model for four typical tasks.
| Model | GSM8K | AGIEval-GK-Math-QA | AGIEval-GK-Math-Cloze | HumanEval | TruthfulQA |
| ----------------- | :----: | :------------: | :---------------: | :-------: | :--------: |
| GPT-4 | 92% | 47.0% | 16.1% | 86.6% | 59% |
| ChatGPT | 68.6%\* | 36.5% | 7.3% | 66.5%\* | 34%\* |
| Llama2 | 56.8% | - | - | 29.9% | - |
| 源2.0-102B | 76.6% | 38.7% | 13.5% | 67.1% | 58% |
| 源2.0-102B-SC | 86.2% | 45.5% | 15.2% | 77.4% | - |
\* 使用与源2.0完全相同的输入数据对ChatGPT进行测试,时间2023年11月
\* Testing ChatGPT using the same input data as Yuan2.0, as of November 2023.
# <span id="Terms">声明与协议/Terms and Conditions</span>
对该模型的原代码仓库使用遵循开源许可协议 Apache 2.0。
源2.0模型支持商用,不需要申请授权,请您了解并遵循[《源2.0模型许可协议》](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan),勿将开源模型和代码及基于开源项目产生的衍生物用于任何可能给国家和社会带来危害的用途以及用于任何未经过安全评估和备案的服务。
尽管模型在训练时我们已采取措施尽力确保数据的合规性和准确性,但模型参数量巨大且受概率随机性因素影响,我们无法保证输出内容的准确性,且模型易被输入指令所误导,本项目不承担开源模型和代码导致的数据安全、舆情风险或发生任何模型被误导、滥用、传播、不当利用而产生的风险和责任。**您将对通过使用、复制、分发和修改模型等方式利用该开源项目所产生的风险与后果,独自承担全部责任。**
The use of the original code repository for this model must comply with the open source license agreement Apache 2.0. The Yuan2.0 model supports commercial use and does not require authorization. Please understand and comply with the [《Yuan 2.0 Model License Agreement》](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan). Do not use the open source model and code, or derivatives of this open source project, for any purposes that may cause harm to the country and society, or for any services that have not undergone security assessment and filing. Although we have taken measures to ensure the compliance and accuracy of the training data, the model has a huge number of parameters and is affected by probability and randomness, so we cannot guarantee the accuracy of its output, and the model is easily misled by input instructions. This project assumes no liability for data-security or public-opinion risks caused by the open-source model and code, nor for any risks and responsibilities arising from the model being misled, abused, disseminated, or improperly utilized. **You will be solely responsible for the risks and consequences arising from the use, copying, distribution, and modification of the model in this open source project.**
# <span id="Cite">引用/Cite</span>
欢迎阅读我们的技术报告 [YUAN 2.0: A Large Language Model with Localized Filtering-based Attention](http://arxiv.org/pdf/2311.15786.pdf)!
Welcome to read our technical report [YUAN 2.0: A Large Language Model with Localized Filtering-based Attention](http://arxiv.org/pdf/2311.15786.pdf)!
```latex
@article{Wu2023,
title = {{YUAN 2.0: A Large Language Model with Localized Filtering-based Attention}},
author = {Wu, Shaohua and Zhao, Xudong and Wang, Shenling and Luo, Jiangang and Li, Lingjun and Chen, Xi and Zhao, Bing and Wang, Wei and Yu, Tong and Zhang, Rongguo and Zhang, Jiahua and Wang, Chao},
url = {http://arxiv.org/abs/2311.15786},
year = {2023}
}
```
|
hoanghoavienvo/roberta-base-detect-cheapfake-co1-co2 | hoanghoavienvo | "2024-01-29T13:35:44Z" | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-29T13:28:17Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-detect-cheapfake-co1-co2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-detect-cheapfake-co1-co2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3239
- Accuracy: 0.905
- F1: 0.9026
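A minimal inference sketch (how the two captions should be joined into one input is an assumption; adjust to match the training format):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="hoanghoavienvo/roberta-base-detect-cheapfake-co1-co2")
# The caption-pair input format is assumed; the model was trained on paired captions.
print(clf("First caption about the image. Second caption about the image."))
```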
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 38 | 0.6857 | 0.52 | 0.6643 |
| No log | 2.0 | 76 | 0.5835 | 0.78 | 0.7284 |
| No log | 3.0 | 114 | 0.3515 | 0.87 | 0.8646 |
| No log | 4.0 | 152 | 0.3897 | 0.845 | 0.8517 |
| No log | 5.0 | 190 | 0.4177 | 0.845 | 0.8268 |
| No log | 6.0 | 228 | 0.3364 | 0.895 | 0.8889 |
| No log | 7.0 | 266 | 0.3319 | 0.89 | 0.8842 |
| No log | 8.0 | 304 | 0.3597 | 0.885 | 0.8770 |
| No log | 9.0 | 342 | 0.3205 | 0.91 | 0.9072 |
| No log | 10.0 | 380 | 0.3239 | 0.905 | 0.9026 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
nokotin/ppo-lunarlander-v1 | nokotin | "2023-08-08T11:28:56Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-08T11:28:51Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -105.37 +/- 61.70
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'nokotin/ppo-lunarlander-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
jiinking/5first_GQA4_llama3B_model | jiinking | "2025-02-28T16:12:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-28T14:59:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ntc-ai/SDXL-LoRA-slider.pincushion-distortion | ntc-ai | "2024-01-03T08:02:38Z" | 17 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | "2024-01-03T08:02:35Z" |
---
language:
- en
thumbnail: "images/evaluate/pincushion distortion.../pincushion distortion_17_3.0.png"
widget:
- text: pincushion distortion
output:
url: images/pincushion distortion_17_3.0.png
- text: pincushion distortion
output:
url: images/pincushion distortion_19_3.0.png
- text: pincushion distortion
output:
url: images/pincushion distortion_20_3.0.png
- text: pincushion distortion
output:
url: images/pincushion distortion_21_3.0.png
- text: pincushion distortion
output:
url: images/pincushion distortion_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "pincushion distortion"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - pincushion distortion (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/pincushion distortion_17_-3.0.png" width=256 height=256 /> | <img src="images/pincushion distortion_17_0.0.png" width=256 height=256 /> | <img src="images/pincushion distortion_17_3.0.png" width=256 height=256 /> |
| <img src="images/pincushion distortion_19_-3.0.png" width=256 height=256 /> | <img src="images/pincushion distortion_19_0.0.png" width=256 height=256 /> | <img src="images/pincushion distortion_19_3.0.png" width=256 height=256 /> |
| <img src="images/pincushion distortion_20_-3.0.png" width=256 height=256 /> | <img src="images/pincushion distortion_20_0.0.png" width=256 height=256 /> | <img src="images/pincushion distortion_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
pincushion distortion
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.pincushion-distortion', weight_name='pincushion distortion.safetensors', adapter_name="pincushion distortion")
# Activate the LoRA
pipe.set_adapters(["pincushion distortion"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, pincushion distortion"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 830 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
exala/db_mc2_13.2.1 | exala | "2025-01-27T23:33:30Z" | 17 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-27T23:33:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zera09/segment_mask_token_v2 | zera09 | "2025-02-18T10:44:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"base_model:zera09/custom-longt5-with-ts",
"base_model:finetune:zera09/custom-longt5-with-ts",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-18T10:43:49Z" | ---
library_name: transformers
base_model: zera09/custom-longt5-with-ts
tags:
- generated_from_trainer
model-index:
- name: segment_mask_token_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segment_mask_token_v2
This model is a fine-tuned version of [zera09/custom-longt5-with-ts](https://huggingface.co/zera09/custom-longt5-with-ts) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 72 | 5.8637 |
| 21.3135 | 2.0 | 144 | 1.6856 |
| 1.3759 | 3.0 | 216 | 1.0404 |
| 1.3759 | 4.0 | 288 | 0.6246 |
| 0.6906 | 5.0 | 360 | 0.3509 |
| 0.3575 | 6.0 | 432 | 0.2973 |
| 0.3583 | 7.0 | 504 | 0.1697 |
| 0.3583 | 8.0 | 576 | 0.1578 |
| 0.2914 | 9.0 | 648 | 0.1353 |
| 0.1716 | 10.0 | 720 | 0.1145 |
| 0.1716 | 11.0 | 792 | 0.0990 |
| 0.2055 | 12.0 | 864 | 0.0975 |
| 0.1352 | 13.0 | 936 | 0.0818 |
| 0.116 | 14.0 | 1008 | 0.0789 |
| 0.116 | 15.0 | 1080 | 0.0812 |
| 0.1264 | 16.0 | 1152 | 0.0800 |
| 0.1599 | 17.0 | 1224 | 0.0762 |
| 0.1599 | 18.0 | 1296 | 0.0720 |
| 0.0694 | 19.0 | 1368 | 0.0726 |
| 0.0742 | 20.0 | 1440 | 0.0718 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
alonzogarbanzo/Gemma-2b-dialogsum-finetuned-initial | alonzogarbanzo | "2024-02-27T08:53:08Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:other",
"region:us"
] | null | "2024-02-27T08:21:55Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
model-index:
- name: Gemma-2b-dialogsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma-2b-dialogsum-finetuned
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 |
opaque2375/test-model | opaque2375 | "2025-02-24T08:56:32Z" | 88 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-24T08:54:16Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** opaque2375
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
digiplay/BlueberryMix_v1 | digiplay | "2024-03-12T19:50:49Z" | 474 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-03-12T18:15:19Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/14323/blueberrymix
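A minimal diffusers sketch for this checkpoint (the prompt and settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("digiplay/BlueberryMix_v1", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe("a bowl of fresh blueberries on a wooden table, soft light").images[0]
image.save("blueberrymix_sample.png")
```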
|
trieudemo11/llama_7b_attrb_cate_6m_2 | trieudemo11 | "2023-09-15T02:08:49Z" | 2 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-15T02:08:33Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
- PEFT 0.6.0.dev0
|
mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF | mradermacher | "2025-01-22T06:35:22Z" | 1,126 | 3 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"12b",
"chat",
"roleplay",
"creative-writing",
"DELLA-linear",
"en",
"base_model:redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2",
"base_model:quantized:redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-19T12:05:05Z" | ---
base_model: redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- 12b
- chat
- roleplay
- creative-writing
- DELLA-linear
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lucyknada/Aura_Uncensored_l3_8B-AWQ | lucyknada | "2024-04-21T06:33:27Z" | 102 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"base_model:ResplendentAI/Aura_Llama3",
"base_model:quantized:ResplendentAI/Aura_Llama3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-04-21T02:22:05Z" | ---
base_model:
- Undi95/Llama-3-Unholy-8B
- Undi95/Llama-3-Unholy-8B
- ResplendentAI/Aura_Llama3
- Undi95/Llama-3-Unholy-8B
- ResplendentAI/RP_Format_QuoteAsterisk_Llama3
- Undi95/Llama-3-Unholy-8B
- ResplendentAI/Luna_Llama3
- Undi95/Llama-3-Unholy-8B
- ResplendentAI/Theory_of_Mind_Llama3
- Undi95/Llama-3-Unholy-8B
- ResplendentAI/BlueMoon_Llama3
library_name: transformers
license: apache-2.0
language:
- en
---
# Aura Uncensored l3 (AWQ quant)

This is the culmination of all my efforts for the Aura line. I have taken the original training data and applied it over Undi95's Unholy base model. This model can and will provide unsafe information and RP. I strongly recommend that you do not use this model if you are sensitive to unsafe output.
I have tested the model thoroughly and believe that it will please the majority of users. I hope that you enjoy this model.
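A minimal loading sketch (assumes a recent transformers with the autoawq package installed; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lucyknada/Aura_Uncensored_l3_8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a haiku about the sea.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|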
Fenyan/ppo-LunarLander-v2 | Fenyan | "2023-12-06T13:55:22Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-06T13:55:03Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.48 +/- 18.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename inside the repo is assumed):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename inside the repo is an assumption; adjust if it differs.
checkpoint = load_from_hub("Fenyan/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|