modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
matchaaaaa/Testarossa-v1-27B | matchaaaaa | "2024-11-29T04:41:13Z" | 13 | 3 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:TheDrummer/Gemmasutra-Pro-27B-v1",
"base_model:merge:TheDrummer/Gemmasutra-Pro-27B-v1",
"base_model:byroneverson/gemma-2-27b-it-abliterated",
"base_model:merge:byroneverson/gemma-2-27b-it-abliterated",
"base_model:migtissera/Tess-v2.5-Gemma-2-27B-alpha",
"base_model:merge:migtissera/Tess-v2.5-Gemma-2-27B-alpha",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-24T06:53:55Z" | ---
base_model:
- migtissera/Tess-v2.5-Gemma-2-27B-alpha
- byroneverson/gemma-2-27b-it-abliterated
- TheDrummer/Gemmasutra-Pro-27B-v1
base_model_relation: merge
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-4.0
---

**Thank you [@Brooketh](https://huggingface.co/brooketh) for the [excellent GGUFs](https://huggingface.co/backyardai/Testarossa-v1-27B-GGUF) as always <3!!**
# Testarossa-v1-27B
Been on a Gemma 2 kick lately. :3
Wicked sharp model with natural, human-like writing. Probably not the most uncensored thing; it is Gemma, after all, and uncensoring it hurts its brains. :<
Initially, I made this for myself because I was really impressed with Gemma's SFW RP performance and situational intelligence, but it wasn't well suited to RP beyond that. So I did this quick shake-n-bake merge, and it worked great on the first try. I'm open to future improvements, but for now I'm very happy with this (especially given how basic the recipe is, hehe)!
**Native Context Length: 8K/8192** *(can be extended to around 16K using RoPE, may break around ~15K or so, YMMV)*
## Prompt Template: Gemma 2
```
<bos><start_of_turn>system
{system prompt}<end_of_turn>
<bos><start_of_turn>user
{message}<end_of_turn>
<start_of_turn>model
{response}<end_of_turn>
```
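For reference, here is a minimal sketch of assembling this template by hand; the token layout mirrors the block above exactly (note that many backends prepend `<bos>` automatically, in which case it should be dropped from the string):
```python
def build_prompt(system_prompt: str, message: str) -> str:
    # Mirrors the Gemma 2 template above; ends at the model turn so the
    # backend generates the response.
    return (
        f"<bos><start_of_turn>system\n{system_prompt}<end_of_turn>\n"
        f"<bos><start_of_turn>user\n{message}<end_of_turn>\n"
        f"<start_of_turn>model\n"
    )
```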
## Recommended Settings:
From my testing, I like these settings and they work really well. As always, feel free to play around with things, and you're more than welcome to suggest your own in the Community tab.
### Samplers:
* Temperature: **1.4** (applying temperature last works best, as with most models)
* Min-P: **0.1**
* Repetition Penalty: **1.03** (ish)
* Rep. Penalty Range: **2048**
* *(below aren't needed but really help!)*
* Dynatemp: Min: **0.45** Max: **2**
* Dynatemp Exponent: **1**
* DRY Multiplier: **0.23**
* DRY Base: **1.75**
* DRY Allowed Length: **2**
* DRY Penalty Range: **1024**
### Instruct Prompt:
I like using this slightly modified Midnight Rose Roleplay instruction prompt.
```
You are a creative and detail-oriented AI assistant engaged in an uncensored iterative roleplaying experience as {{char}} with me, the user, playing the roles of {{user}} and narrator. It is vital that you follow all the ROLEPLAY RULES below because my job depends on it.
ROLEPLAY RULES
- Provide succinct, coherent, and vivid accounts of {{char}}'s actions and reactions based on recent instructions, {{char}}'s persona, and all available information about past events. Aim for clarity and concision in your language.
- Demonstrate {{char}}'s personality and mannerisms.
- Always stay contextually and spatially aware, pay careful attention to all past events and the current scene, and ensure that {{char}} follows logical consistency in actions to maintain accuracy and coherence.
- Explicit adult content and profanity are permitted.
- Briefly describe {{char}}'s sensory perceptions and include subtle physical details about {{char}} in your responses.
- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts.
- When writing {{char}}'s actions, enclose those words in *asterisks like this*.
- Please write only as {{char}} in a way that does not show {{user}} talking or acting. You should only ever act as {{char}} reacting to {{user}}.
```
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* migtissera/Tess-v2.5-Gemma-2-27B-alpha
* byroneverson/gemma-2-27b-it-abliterated
* TheDrummer/Gemmasutra-Pro-27B-v1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 12]
    model: migtissera/Tess-v2.5-Gemma-2-27B-alpha
- sources:
  - layer_range: [12, 34]
    model: byroneverson/gemma-2-27b-it-abliterated
- sources:
  - layer_range: [34, 46]
    model: TheDrummer/Gemmasutra-Pro-27B-v1
```
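The three slices stack 12 + 22 + 12 = 46 layers, matching the layer count of the Gemma 2 27B models being merged. For completeness, a minimal loading sketch using standard transformers calls (assumes enough memory for a 27B model in bfloat16):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "matchaaaaa/Testarossa-v1-27B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard across available GPUs / offload as needed
)
```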
As always, take care of yourself, and remember that you matter and are super cool and awesome <3 |
sharkMeow/sentance_split_by_time_gpt_concate_2 | sharkMeow | "2024-10-10T02:17:08Z" | 18 | 0 | null | [
"safetensors",
"chinese_clip",
"generated_from_trainer",
"base_model:OFA-Sys/chinese-clip-vit-base-patch16",
"base_model:finetune:OFA-Sys/chinese-clip-vit-base-patch16",
"region:us"
] | null | "2024-10-09T16:52:19Z" | ---
base_model: OFA-Sys/chinese-clip-vit-base-patch16
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentance_split_by_time_gpt_concate_2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/shark_meow_team/huggingface/runs/cdver57p)
# sentance_split_by_time_gpt_concate_2
This model is a fine-tuned version of [OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8914
- Accuracy: 0.0782
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 25
- eval_batch_size: 20
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 200
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
- mixed_precision_training: Native AMP
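For reference, the effective batch size follows from the values above: total_train_batch_size = train_batch_size × gradient_accumulation_steps = 25 × 8 = 200.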
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 2.0864 | 5.9928 | 1866 | 2.9935 | 0.0803 |
| 1.9035 | 11.9855 | 3732 | 3.1629 | 0.0863 |
| 1.779 | 17.9783 | 5598 | 3.2064 | 0.0870 |
| 1.7158 | 23.9711 | 7464 | 3.4417 | 0.0854 |
| 1.6832 | 29.9639 | 9330 | 3.4988 | 0.0845 |
| 1.6554 | 35.9566 | 11196 | 3.5538 | 0.0833 |
| 1.6498 | 41.9494 | 13062 | 3.6819 | 0.0819 |
| 1.6335 | 47.9422 | 14928 | 3.7696 | 0.0809 |
| 1.6339 | 53.9350 | 16794 | 3.8098 | 0.0799 |
| 1.6264 | 59.9277 | 18660 | 3.8914 | 0.0789 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Zihao-Li/mala-500-10b-v2-merged | Zihao-Li | "2024-08-31T20:20:19Z" | 5 | 0 | null | [
"safetensors",
"llama",
"text-generation",
"multilingual",
"dataset:cis-lmu/Glot500",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | text-generation | "2024-08-31T15:02:21Z" | ---
license: llama2
datasets:
- cis-lmu/Glot500
language:
- multilingual
base_model: meta-llama/Llama-2-7b-hf
pipeline_tag: text-generation
---
The model is the version of [MaLA-LM/mala-500-10b-v2](https://huggingface.co/MaLA-LM/mala-500-10b-v2) with its LoRA adapter merged into the base weights. |
Entz/llama3-8b-oig-unsloth-merged | Entz | "2024-06-14T12:22:49Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T18:58:24Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Entz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shadowml/FoxBeagle-7B | shadowml | "2024-01-30T23:30:51Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"base_model:alnrg2arg/test3_sft_16bit",
"base_model:merge:alnrg2arg/test3_sft_16bit",
"base_model:shadowml/WestBeagle-7B",
"base_model:merge:shadowml/WestBeagle-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-30T23:26:18Z" | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- shadowml/WestBeagle-7B
- alnrg2arg/test3_sft_16bit
---
# FoxBeagle-7B
FoxBeagle-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [shadowml/WestBeagle-7B](https://huggingface.co/shadowml/WestBeagle-7B)
* [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: shadowml/WestBeagle-7B
        layer_range: [0, 32]
      - model: alnrg2arg/test3_sft_16bit
        layer_range: [0, 32]
merge_method: slerp
base_model: shadowml/WestBeagle-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
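For the curious, a rough sketch of how the `t` gradients above behave (my reading of mergekit's documented gradient syntax, not its actual code): each five-value list is linearly interpolated across the 32 layers, giving self-attention and MLP weights different per-layer slerp factors.
```python
import numpy as np

def per_layer_t(anchors, num_layers=32):
    # Linearly interpolate the anchor values across the layer index range.
    layer_pos = np.linspace(0, 1, num_layers)
    anchor_pos = np.linspace(0, 1, len(anchors))
    return np.interp(layer_pos, anchor_pos, anchors)

t_self_attn = per_layer_t([0, 0.5, 0.3, 0.7, 1])  # self_attn filter from the YAML above
t_mlp = per_layer_t([1, 0.5, 0.7, 0.3, 0])        # mlp filter
```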
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "shadowml/FoxBeagle-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
monkseal555/CIRA_Combo | monkseal555 | "2023-12-05T18:15:18Z" | 1 | 0 | tf-keras | [
"tf-keras",
"license:other",
"region:us"
] | null | "2023-12-05T18:14:03Z" | ---
license: other
license_name: restricted
license_link: LICENSE
---
|
GZHUFB/LunarLander-v2 | GZHUFB | "2023-12-06T02:58:02Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-06T02:57:40Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 250.63 +/- 40.44
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it as a PPO policy.
checkpoint = load_from_hub(repo_id="GZHUFB/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
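A short evaluation rollout to sanity-check the loaded policy (a sketch; assumes `gymnasium` with the Box2D extra installed):
```python
import gymnasium as gym

env = gym.make("LunarLander-v2")  # env id as in the card
obs, info = env.reset()
total_reward, done = 0.0, False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode reward: {total_reward:.2f}")
```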
|
Tufan1/BioMedLM-Cardio-Fold1 | Tufan1 | "2025-04-08T12:01:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-08T12:01:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nvidia/OpenMath2-Llama3.1-70B | nvidia | "2024-11-25T20:12:03Z" | 124 | 18 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"nvidia",
"math",
"conversational",
"en",
"dataset:nvidia/OpenMathInstruct-2",
"arxiv:2410.01560",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:finetune:meta-llama/Llama-3.1-70B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-30T21:01:56Z" | ---
license: llama3.1
base_model:
- meta-llama/Llama-3.1-70B
datasets:
- nvidia/OpenMathInstruct-2
language:
- en
tags:
- nvidia
- math
library_name: transformers
---
# OpenMath2-Llama3.1-70B
OpenMath2-Llama3.1-70B is obtained by finetuning [Llama3.1-70B-Base](https://huggingface.co/meta-llama/Llama-3.1-70B) with [OpenMathInstruct-2](https://huggingface.co/datasets/nvidia/OpenMathInstruct-2).
The model outperforms [Llama3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) on [MATH](https://github.com/hendrycks/math) by 3.9%.
| Model | GSM8K | MATH | AMC 2023 | AIME 2024 | Omni-MATH |
|:---|:---:|:---:|:---:|:---:|:---:|
| Llama3.1-8B-Instruct | 84.5 | 51.9 | 9/40 | 2/30 | 12.7 |
| OpenMath2-Llama3.1-8B ([nemo](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B-nemo) \| [HF](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B)) | 91.7 | 67.8 | 16/40 | 3/30 | 22.0 |
| + majority@256 | 94.1 | 76.1 | 23/40 | 3/30 | 24.6 |
| Llama3.1-70B-Instruct | 95.8 | 67.9 | 19/40 | 6/30 | 19.0 |
| **OpenMath2-Llama3.1-70B** ([nemo](https://huggingface.co/nvidia/OpenMath2-Llama3.1-70B-nemo) \| [HF](https://huggingface.co/nvidia/OpenMath2-Llama3.1-70B)) | 94.9 | 71.9 | 20/40 | 4/30 | 23.1 |
| + majority@256 | 96.0 | 79.6 | 24/40 | 6/30 | 27.6 |
The pipeline we used to produce the data and models is fully open-sourced!
- [Code](https://github.com/NVIDIA/NeMo-Skills)
- [Models](https://huggingface.co/collections/nvidia/openmath-2-66fb142317d86400783d2c7b)
- [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-2)
See our [paper](https://arxiv.org/abs/2410.01560) to learn more details!
# How to use the models?
Our models are trained with the same "chat format" as Llama3.1-instruct models (same system/user/assistant tokens).
Please note that these models have NOT been instruction-tuned on general data and thus might not provide good answers outside of the math domain.
We recommend using [instructions in our repo](https://nvidia.github.io/NeMo-Skills/basics/inference/) to run inference with these models, but here is an example of how to do it through the transformers API:
```python
import transformers
import torch

model_id = "nvidia/OpenMath2-Llama3.1-70B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": "Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{}.\n\n"
        + "What is the minimum value of $a^2+6a-7$?",
    },
]

outputs = pipeline(
    messages,
    max_new_tokens=4096,
)
print(outputs[0]["generated_text"][-1]["content"])
```
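As a sanity check on the sample problem: completing the square gives $a^2+6a-7 = (a+3)^2 - 16$, so the expected answer is $-16$ (attained at $a = -3$).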
# Reproducing our results
We provide [all instructions](https://nvidia.github.io/NeMo-Skills/openmathinstruct2/) to fully reproduce our results.
## Citation
If you find our work useful, please consider citing us!
```bibtex
@article{toshniwal2024openmath2,
title = {OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data},
author = {Shubham Toshniwal and Wei Du and Ivan Moshkov and Branislav Kisacanin and Alexan Ayrapetyan and Igor Gitman},
year = {2024},
journal = {arXiv preprint arXiv:2410.01560}
}
```
## Terms of use
By accessing this model, you are agreeing to the Llama 3.1 terms and conditions of the [license](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE), [acceptable use policy](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/USE_POLICY.md) and [Meta's privacy policy](https://www.facebook.com/privacy/policy/) |
MaziyarPanahi/mergekit-slerp-xxzrbzh-GGUF | MaziyarPanahi | "2024-06-18T05:45:28Z" | 13 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLM/WizardMath-7B-V1.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-xxzrbzh",
"base_model:quantized:mergekit-community/mergekit-slerp-xxzrbzh"
] | text-generation | "2024-06-18T05:23:37Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:NousResearch/Hermes-2-Pro-Mistral-7B
- base_model:WizardLM/WizardMath-7B-V1.1
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-xxzrbzh-GGUF
base_model: mergekit-community/mergekit-slerp-xxzrbzh
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-xxzrbzh-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-xxzrbzh-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-xxzrbzh](https://huggingface.co/mergekit-community/mergekit-slerp-xxzrbzh)
## Description
[MaziyarPanahi/mergekit-slerp-xxzrbzh-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-xxzrbzh-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-xxzrbzh](https://huggingface.co/mergekit-community/mergekit-slerp-xxzrbzh).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
lkntrp/q-FrozenLake-v1-4x4-noSlippery | lkntrp | "2024-03-08T18:21:06Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-08T18:21:03Z" | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4
      type: FrozenLake-v1-4x4
    metrics:
    - type: mean_reward
      value: 0.00 +/- 0.00
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the Deep RL course helper that downloads and unpickles the Q-table.
import gymnasium as gym  # or `import gym` in older setups

model = load_from_hub(repo_id="lkntrp/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
arun100/whisper-base-hi-3 | arun100 | "2024-01-19T22:46:48Z" | 60 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:google/fleurs",
"base_model:arun100/whisper-base-hi-2",
"base_model:finetune:arun100/whisper-base-hi-2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-01-19T16:26:54Z" | ---
license: apache-2.0
base_model: arun100/whisper-base-hi-2
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Base Hindi
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: google/fleurs hi_in
      type: google/fleurs
      config: hi_in
      split: test
      args: hi_in
    metrics:
    - name: Wer
      type: wer
      value: 27.72060783790989
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Hindi
This model is a fine-tuned version of [arun100/whisper-base-hi-2](https://huggingface.co/arun100/whisper-base-hi-2) on the google/fleurs hi_in dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4468
- Wer: 27.7206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4805 | 33.0 | 250 | 0.4868 | 30.4186 |
| 0.3559 | 66.0 | 500 | 0.4417 | 29.0909 |
| 0.2655 | 99.0 | 750 | 0.4307 | 28.2165 |
| 0.1987 | 133.0 | 1000 | 0.4350 | 27.8326 |
| 0.1472 | 166.0 | 1250 | 0.4468 | 27.7206 |
| 0.1061 | 199.0 | 1500 | 0.4640 | 28.0992 |
| 0.0767 | 233.0 | 1750 | 0.4835 | 28.5737 |
| 0.0541 | 266.0 | 2000 | 0.5032 | 28.6857 |
| 0.0396 | 299.0 | 2250 | 0.5202 | 28.7763 |
| 0.03 | 333.0 | 2500 | 0.5353 | 29.2029 |
| 0.0237 | 366.0 | 2750 | 0.5479 | 28.9096 |
| 0.0195 | 399.0 | 3000 | 0.5587 | 28.9096 |
| 0.0163 | 433.0 | 3250 | 0.5683 | 28.9469 |
| 0.014 | 466.0 | 3500 | 0.5767 | 29.1336 |
| 0.0121 | 499.0 | 3750 | 0.5838 | 29.3415 |
| 0.0108 | 533.0 | 4000 | 0.5900 | 29.2775 |
| 0.01 | 566.0 | 4250 | 0.5951 | 29.6081 |
| 0.0093 | 599.0 | 4500 | 0.5988 | 29.4855 |
| 0.0088 | 633.0 | 4750 | 0.6012 | 29.5281 |
| 0.0087 | 666.0 | 5000 | 0.6020 | 29.4268 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
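The card doesn't include inference code; a minimal sketch using the standard transformers pipeline (the audio filename is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline and transcribe a clip.
asr = pipeline("automatic-speech-recognition", model="arun100/whisper-base-hi-3")
print(asr("sample_hindi.wav")["text"])
```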
|
Hielke/deberta-v3-finetuned-t5-sicknl | Hielke | "2024-08-19T14:53:34Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-19T14:53:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ksw1/DPO-3-10k | ksw1 | "2024-06-09T19:00:57Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:ksw1/llama-3-8b-sleeper-agent",
"base_model:finetune:ksw1/llama-3-8b-sleeper-agent",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-07T16:17:32Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- dpo
base_model: ksw1/llama-3-8b-sleeper-agent
---
# Uploaded model
- **Developed by:** ksw1
- **License:** apache-2.0
- **Finetuned from model :** ksw1/llama-3-8b-sleeper-agent
- **Data that was used to train this model can be found on HuggingFace at:** [ksw1/cs224n-dpo-3](https://huggingface.co/datasets/ksw1/cs224n-dpo-3)
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF | mradermacher | "2025-02-05T05:29:12Z" | 298 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ChaoticNeutrals/Hathor_Aleph-L3-8B-v0.72",
"base_model:quantized:ChaoticNeutrals/Hathor_Aleph-L3-8B-v0.72",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-07-01T06:29:17Z" | ---
base_model: ChaoticNeutrals/Hathor_Aleph-L3-8B-v0.72
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ChaoticNeutrals/Hathor_Aleph-L3-8B-v0.72
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
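Once a quant is downloaded, it can be run with, for example, llama-cpp-python (a minimal sketch; substitute whichever quant file you fetched):
```python
from llama_cpp import Llama

# Load the GGUF file and run a short completion.
llm = Llama(model_path="Hathor_Aleph-L3-8B-v0.72.i1-Q4_K_M.gguf", n_ctx=8192)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```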
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-grade-2-3 | alinerodrigues | "2024-03-13T21:32:03Z" | 1 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-03-13T18:31:00Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-1b-mecita-portuguese-all-grade-2-3
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1b-mecita-portuguese-all-grade-2-3
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-xls-r-1b-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1634
- Wer: 0.0881
- Cer: 0.0265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 35.8055 | 0.99 | 61 | 2.9550 | 1.0 | 1.0 |
| 6.2098 | 2.0 | 123 | 3.0135 | 1.0 | 1.0 |
| 6.2098 | 2.99 | 184 | 0.4308 | 0.4020 | 0.1050 |
| 1.994 | 4.0 | 246 | 0.2322 | 0.1333 | 0.0372 |
| 0.4198 | 4.99 | 307 | 0.1959 | 0.1028 | 0.0308 |
| 0.4198 | 6.0 | 369 | 0.1858 | 0.0964 | 0.0301 |
| 0.2811 | 6.99 | 430 | 0.1883 | 0.1012 | 0.0314 |
| 0.2811 | 8.0 | 492 | 0.1788 | 0.0921 | 0.0287 |
| 0.2562 | 8.99 | 553 | 0.1769 | 0.0917 | 0.0288 |
| 0.2253 | 10.0 | 615 | 0.1746 | 0.0944 | 0.0279 |
| 0.2253 | 10.99 | 676 | 0.1634 | 0.0881 | 0.0265 |
| 0.1836 | 12.0 | 738 | 0.1730 | 0.0861 | 0.0254 |
| 0.1836 | 12.99 | 799 | 0.1763 | 0.0889 | 0.0267 |
| 0.1879 | 14.0 | 861 | 0.1750 | 0.0897 | 0.0264 |
| 0.1615 | 14.99 | 922 | 0.1794 | 0.0829 | 0.0263 |
| 0.1615 | 16.0 | 984 | 0.1907 | 0.0921 | 0.0275 |
| 0.1555 | 16.99 | 1045 | 0.1862 | 0.0881 | 0.0258 |
| 0.157 | 18.0 | 1107 | 0.1681 | 0.0889 | 0.0273 |
| 0.157 | 18.99 | 1168 | 0.1867 | 0.0869 | 0.0269 |
| 0.1516 | 20.0 | 1230 | 0.1750 | 0.0861 | 0.0251 |
| 0.1516 | 20.99 | 1291 | 0.1864 | 0.0897 | 0.0266 |
| 0.1332 | 22.0 | 1353 | 0.1757 | 0.0869 | 0.0258 |
| 0.1322 | 22.99 | 1414 | 0.1860 | 0.0802 | 0.0248 |
| 0.1322 | 24.0 | 1476 | 0.1721 | 0.0790 | 0.0238 |
| 0.1235 | 24.99 | 1537 | 0.1731 | 0.0861 | 0.0251 |
| 0.1235 | 26.0 | 1599 | 0.1857 | 0.0833 | 0.0242 |
| 0.1211 | 26.99 | 1660 | 0.1745 | 0.0817 | 0.0242 |
| 0.1211 | 28.0 | 1722 | 0.1711 | 0.0853 | 0.0244 |
| 0.1211 | 28.99 | 1783 | 0.1983 | 0.0865 | 0.0254 |
| 0.0968 | 30.0 | 1845 | 0.1906 | 0.0845 | 0.0238 |
| 0.109 | 30.99 | 1906 | 0.1869 | 0.0881 | 0.0252 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
dzanbek/4eb8f7c1-96da-450e-9f57-35fc9b4cee24 | dzanbek | "2025-01-10T23:38:38Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
] | null | "2025-01-10T23:38:21Z" | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4eb8f7c1-96da-450e-9f57-35fc9b4cee24
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
  - data_files:
      - 4fae9363eff907d4_train_data.json
    ds_type: json
    format: custom
    path: /workspace/input_data/4fae9363eff907d4_train_data.json
    type:
      field_instruction: title
      field_output: content
      format: '{instruction}'
      no_input_format: '{instruction}'
      system_format: '{system}'
      system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dzanbek/4eb8f7c1-96da-450e-9f57-35fc9b4cee24
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/4fae9363eff907d4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
  pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e1c9ed42-9473-4149-b158-04607c87b5e7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e1c9ed42-9473-4149-b158-04607c87b5e7
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
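For reference, a config like this is normally launched with axolotl's CLI, e.g. `accelerate launch -m axolotl.cli.train config.yaml` (the command form is from axolotl's documentation; rerunning it assumes the dataset file referenced above is available).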
# 4eb8f7c1-96da-450e-9f57-35fc9b4cee24
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0039 | 1 | 6.9381 |
| 6.9376 | 0.0312 | 8 | 6.9346 |
| 6.9288 | 0.0625 | 16 | 6.9254 |
| 6.9164 | 0.0938 | 24 | 6.9197 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/XCoder-70B-GGUF | mradermacher | "2025-03-25T00:24:45Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:banksy235/XCoder-70B",
"base_model:quantized:banksy235/XCoder-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-24T18:21:01Z" | ---
base_model: banksy235/XCoder-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/banksy235/XCoder-70B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/XCoder-70B-GGUF/resolve/main/XCoder-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
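The multi-part Q6_K and Q8_0 files above are plain byte splits and just need to be concatenated back into a single GGUF before use; a minimal sketch in Python, equivalent to `cat part1 part2 > out`:
```python
import shutil

# Stream the split parts back into one GGUF file.
parts = ["XCoder-70B.Q6_K.gguf.part1of2", "XCoder-70B.Q6_K.gguf.part2of2"]
with open("XCoder-70B.Q6_K.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, joined)
```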
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
neuralmagic/Llama-2-7b-ultrachat200k | neuralmagic | "2024-05-07T15:29:45Z" | 1,046 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"chat",
"dataset:HuggingFaceH4/ultrachat_200k",
"arxiv:2405.03594",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1907.10641",
"arxiv:1911.01547",
"arxiv:2109.07958",
"arxiv:2107.03374",
"arxiv:2110.14168",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-15T15:45:04Z" | ---
base_model: meta-llama/Llama-2-7b-hf
inference: true
model_type: llama
pipeline_tag: text-generation
datasets:
- HuggingFaceH4/ultrachat_200k
tags:
- chat
---
# Llama-2-7b-ultrachat
This repo contains a [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf) finetuned for chat tasks using the [UltraChat 200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
Official model weights from [Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment](https://arxiv.org/abs/2405.03594).
**Authors**: Neural Magic, Cerebras
## Usage
Below we share some code snippets to help you get started quickly with running the model.
### Sparse Transfer
By leveraging a pre-sparsified model's structure, you can efficiently fine-tune on new data, leading to reduced hyperparameter tuning, training times, and computational costs. Learn about this process [here](https://neuralmagic.github.io/docs-v2/get-started/transfer).
### Running the model
This model may be run with the transformers library. For accelerated inference with sparsity, deploy with [nm-vllm](https://github.com/neuralmagic/nm-vllm) or [deepsparse](https://github.com/neuralmagic/deepsparse).
```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("neuralmagic/Llama-2-7b-ultrachat")
model = AutoModelForCausalLM.from_pretrained("neuralmagic/Llama-2-7b-ultrachat", device_map="auto")

# apply_chat_template expects a list of chat messages; generate takes the tensor directly.
messages = [{"role": "user", "content": "Write me a poem about Machine Learning."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
## Evaluation Benchmark Results
Model evaluation metrics and results.
| Benchmark | Metric | Llama-2-7b-ultrachat | Llama-2-7b-pruned50-retrained-ultrachat |
|------------------------------------------------|---------------|-------------|-------------------------------|
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | xxxx | xxxx |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | xxxx | xxxx |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | xxxx | xxxx |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | xxxx | xxxx |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | xxxx | xxxx |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | xxxx | xxxx |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | xxxx | xxxx |
## Model Training Details
Coming soon.
## Help
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ) |
Grayx/fiufiu_455 | Grayx | "2025-01-08T22:25:48Z" | 91 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-08T20:03:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
awonga/othello-gpt-600k-sae-blocks.0.hook_resid_pre-blocks.0.ln1.hook_normalized | awonga | "2025-04-11T18:15:22Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-04-11T13:55:58Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
Liao3161/Llama-3.1-8B-4bit-wenyenwen | Liao3161 | "2024-12-17T03:26:11Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-17T02:58:41Z" | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Liao3161
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MadFritz/ppo-SnowballTarget | MadFritz | "2023-12-10T10:12:37Z" | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-12-10T10:12:32Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MadFritz/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_tfidf | ThuyNT03 | "2023-08-31T19:00:19Z" | 90 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-31T18:44:42Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_insert_tfidf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_insert_tfidf
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9859
- Accuracy: 0.76
- F1: 0.7539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
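For reference, a minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the `output_dir` name is an assumption, and the Adam betas/epsilon listed are the Trainer defaults.
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; output_dir is an assumed name.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-Final_Mixed-aug_insert_tfidf",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,  # Adam betas/epsilon above are the Trainer defaults
)
```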
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0759 | 1.0 | 88 | 1.0112 | 0.53 | 0.4322 |
| 0.8863 | 2.0 | 176 | 0.8349 | 0.62 | 0.5864 |
| 0.7123 | 3.0 | 264 | 0.6804 | 0.72 | 0.7135 |
| 0.5451 | 4.0 | 352 | 0.7164 | 0.72 | 0.7144 |
| 0.4055 | 5.0 | 440 | 0.8908 | 0.74 | 0.7354 |
| 0.2911 | 6.0 | 528 | 0.9136 | 0.74 | 0.7323 |
| 0.2047 | 7.0 | 616 | 0.9533 | 0.74 | 0.7323 |
| 0.1831 | 8.0 | 704 | 0.9859 | 0.76 | 0.7539 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
GV91/sd-class-VG-butterflies-32 | GV91 | "2023-10-28T15:41:12Z" | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2023-10-28T15:40:17Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class π§¨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('GV91/sd-class-VG-butterflies-32')
image = pipeline().images[0]
image
```
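To sample several images at once, the standard `DDPMPipeline` call also accepts a `batch_size` argument (a small usage sketch; the filenames are arbitrary):
```python
# Generate a batch of butterflies and save them to disk
images = pipeline(batch_size=4).images
for i, img in enumerate(images):
    img.save(f"butterfly_{i}.png")
```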
|
meowewww/wav2vec2-base-timit-demo-check32dataset | meowewww | "2024-06-04T06:39:57Z" | 169 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:kresnik/wav2vec2-large-xlsr-korean",
"base_model:finetune:kresnik/wav2vec2-large-xlsr-korean",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-04T06:37:15Z" | ---
license: apache-2.0
base_model: kresnik/wav2vec2-large-xlsr-korean
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-base-timit-demo-check32dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-check32dataset
This model is a fine-tuned version of [kresnik/wav2vec2-large-xlsr-korean](https://huggingface.co/kresnik/wav2vec2-large-xlsr-korean) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9876
- Wer: 0.8095
- Cer: 0.4179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.5272 | 5.0 | 5 | 2.3416 | 0.8333 | 0.4030 |
| 2.5892 | 10.0 | 10 | 2.2228 | 0.8333 | 0.3955 |
| 1.5311 | 15.0 | 15 | 2.1193 | 0.8095 | 0.4030 |
| 0.964 | 20.0 | 20 | 2.0500 | 0.7857 | 0.4179 |
| 0.8757 | 25.0 | 25 | 2.0115 | 0.7619 | 0.4104 |
| 0.7654 | 30.0 | 30 | 1.9876 | 0.8095 | 0.4179 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cpu
- Datasets 2.19.2
- Tokenizers 0.19.1
|
thdai2000/penneparsing-sft | thdai2000 | "2024-06-01T09:11:39Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mobiuslabsgmbh/aanaphi2-v0.1",
"base_model:adapter:mobiuslabsgmbh/aanaphi2-v0.1",
"region:us"
] | null | "2024-06-01T08:33:26Z" | ---
library_name: peft
base_model: mobiuslabsgmbh/aanaphi2-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
nblinh/9ae3e232-be59-4aac-938c-022da4b3804d | nblinh | "2025-02-01T04:45:19Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-01T04:25:16Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9ae3e232-be59-4aac-938c-022da4b3804d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8391d58e45127793_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8391d58e45127793_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/9ae3e232-be59-4aac-938c-022da4b3804d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8391d58e45127793_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2d2c0770-ec7d-41e1-a607-5367ecb0940b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2d2c0770-ec7d-41e1-a607-5367ecb0940b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9ae3e232-be59-4aac-938c-022da4b3804d
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (OptimizerNames.ADAMW_BNB) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.0944 | 0.3404 | 200 | 1.9470 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jekunz/smollm135-da05-is1-no05-sv05-ties | jekunz | "2025-04-07T07:58:26Z" | 0 | 0 | null | [
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"jekunz/smollm-135m-cpt-fineweb-icelandic",
"jekunz/smollm-135m-cpt-fineweb-swedish",
"jekunz/smollm-135m-cpt-fineweb-danish",
"jekunz/smollm-135m-cpt-fineweb-norwegian-bokmaal",
"base_model:jekunz/smollm-135m-cpt-fineweb-danish",
"base_model:merge:jekunz/smollm-135m-cpt-fineweb-danish",
"base_model:jekunz/smollm-135m-cpt-fineweb-icelandic",
"base_model:merge:jekunz/smollm-135m-cpt-fineweb-icelandic",
"base_model:jekunz/smollm-135m-cpt-fineweb-norwegian-bokmaal",
"base_model:merge:jekunz/smollm-135m-cpt-fineweb-norwegian-bokmaal",
"base_model:jekunz/smollm-135m-cpt-fineweb-swedish",
"base_model:merge:jekunz/smollm-135m-cpt-fineweb-swedish",
"region:us"
] | null | "2025-04-07T07:58:15Z" | |
davidschulte/ESM_maveriq__bigbenchhard_tracking_shuffled_objects_five_objects | davidschulte | "2025-03-26T14:39:27Z" | 18 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:maveriq/bigbenchhard",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-05T17:15:24Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- maveriq/bigbenchhard
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM maveriq/bigbenchhard
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** maveriq/bigbenchhard
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** maveriq/bigbenchhard
- **Subset [optional]:** tracking_shuffled_objects_five_objects
- **Text Column:** input
- **Label Column:** target
- **Dataset Split:** train
- **Sample size [optional]:** 250
- **Sample seed [optional]:**
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have enough training data for your problem, just use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning and it can significantly improve the target performance.
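As a rough sketch of this two-stage workflow with 🤗 Transformers (the dataset variables and training settings below are placeholders, not a prescription):
```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

BASE = "bert-base-multilingual-uncased"

def fine_tune(model, train_dataset, output_dir):
    """Minimal Trainer loop; real runs need eval data, metrics and collators."""
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=3)
    Trainer(model=model, args=args, train_dataset=train_dataset).train()
    return model

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=2)

# Stage 1: fine-tune on the intermediate task recommended by ESM-LogME.
# Stage 2: continue from those weights on your own target task.
# (intermediate_ds / target_ds stand for pre-tokenized datasets you provide.)
# model = fine_tune(model, intermediate_ds, "stage1-intermediate")
# model = fine_tune(model, target_ds, "stage2-target")
```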
But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.
### You want to find similar datasets to your target dataset
ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find tasks similar to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs, please have a look at the [official GitHub repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
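In the linear configuration used here, an ESM boils down to a single 768×768 linear map trained to regress fine-tuned embeddings from base-model embeddings. A conceptual PyTorch sketch, using the ESM training hyperparameters listed above (the embedding tensors are random placeholders):
```python
import torch
import torch.nn as nn

class LinearESM(nn.Module):
    """A single linear layer that maps base-model embeddings to
    approximations of the fine-tuned model's embeddings."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.map = nn.Linear(dim, dim)

    def forward(self, base_embeddings: torch.Tensor) -> torch.Tensor:
        return self.map(base_embeddings)

esm = LinearESM()
optimizer = torch.optim.AdamW(esm.parameters(), lr=1e-3, weight_decay=0.01)
loss_fn = nn.MSELoss()

# Placeholder batch: embeddings of the same texts from the base model and
# from a model fine-tuned on the intermediate task.
base_emb = torch.randn(32, 768)
finetuned_emb = torch.randn(32, 768)

optimizer.zero_grad()
loss = loss_fn(esm(base_emb), finetuned_emb)
loss.backward()
optimizer.step()
```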
## How can I use Embedding Space Maps for Intermediate Task Selection?
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
MayBashendy/ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run3_AugV5_k18_task1_organization | MayBashendy | "2025-01-16T00:00:32Z" | 184 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-31T21:02:22Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run3_AugV5_k18_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run3_AugV5_k18_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7471
- Qwk: 0.7023
- Mse: 0.7471
- Rmse: 0.8643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0235 | 2 | 7.0314 | 0.0118 | 7.0314 | 2.6517 |
| No log | 0.0471 | 4 | 4.7394 | 0.0981 | 4.7394 | 2.1770 |
| No log | 0.0706 | 6 | 3.3885 | 0.0222 | 3.3885 | 1.8408 |
| No log | 0.0941 | 8 | 2.9468 | 0.0654 | 2.9468 | 1.7166 |
| No log | 0.1176 | 10 | 3.8600 | -0.0524 | 3.8600 | 1.9647 |
| No log | 0.1412 | 12 | 3.6822 | 0.0105 | 3.6822 | 1.9189 |
| No log | 0.1647 | 14 | 2.2405 | 0.1630 | 2.2405 | 1.4968 |
| No log | 0.1882 | 16 | 1.8078 | 0.3130 | 1.8078 | 1.3446 |
| No log | 0.2118 | 18 | 1.9225 | 0.2182 | 1.9225 | 1.3865 |
| No log | 0.2353 | 20 | 1.9842 | 0.1964 | 1.9842 | 1.4086 |
| No log | 0.2588 | 22 | 2.0685 | 0.1626 | 2.0685 | 1.4382 |
| No log | 0.2824 | 24 | 2.0098 | 0.1789 | 2.0098 | 1.4177 |
| No log | 0.3059 | 26 | 2.0630 | 0.1562 | 2.0630 | 1.4363 |
| No log | 0.3294 | 28 | 1.8378 | 0.3089 | 1.8378 | 1.3556 |
| No log | 0.3529 | 30 | 1.7322 | 0.2314 | 1.7322 | 1.3161 |
| No log | 0.3765 | 32 | 1.6671 | 0.2857 | 1.6671 | 1.2912 |
| No log | 0.4 | 34 | 1.7067 | 0.3810 | 1.7067 | 1.3064 |
| No log | 0.4235 | 36 | 2.1197 | 0.2464 | 2.1197 | 1.4559 |
| No log | 0.4471 | 38 | 2.0824 | 0.2464 | 2.0824 | 1.4430 |
| No log | 0.4706 | 40 | 1.6853 | 0.3465 | 1.6853 | 1.2982 |
| No log | 0.4941 | 42 | 1.4002 | 0.3220 | 1.4002 | 1.1833 |
| No log | 0.5176 | 44 | 1.3654 | 0.2807 | 1.3654 | 1.1685 |
| No log | 0.5412 | 46 | 1.3367 | 0.3036 | 1.3367 | 1.1562 |
| No log | 0.5647 | 48 | 1.3295 | 0.3103 | 1.3295 | 1.1531 |
| No log | 0.5882 | 50 | 1.3482 | 0.3833 | 1.3482 | 1.1611 |
| No log | 0.6118 | 52 | 1.3523 | 0.4132 | 1.3523 | 1.1629 |
| No log | 0.6353 | 54 | 1.3706 | 0.4228 | 1.3706 | 1.1707 |
| No log | 0.6588 | 56 | 1.6018 | 0.4 | 1.6018 | 1.2656 |
| No log | 0.6824 | 58 | 1.7317 | 0.3284 | 1.7317 | 1.3159 |
| No log | 0.7059 | 60 | 1.6287 | 0.3759 | 1.6287 | 1.2762 |
| No log | 0.7294 | 62 | 1.3157 | 0.4769 | 1.3157 | 1.1470 |
| No log | 0.7529 | 64 | 1.2552 | 0.4754 | 1.2552 | 1.1204 |
| No log | 0.7765 | 66 | 1.3391 | 0.4202 | 1.3391 | 1.1572 |
| No log | 0.8 | 68 | 1.1898 | 0.4628 | 1.1898 | 1.0908 |
| No log | 0.8235 | 70 | 1.1369 | 0.5440 | 1.1369 | 1.0663 |
| No log | 0.8471 | 72 | 1.4889 | 0.4427 | 1.4889 | 1.2202 |
| No log | 0.8706 | 74 | 2.1408 | 0.2083 | 2.1408 | 1.4631 |
| No log | 0.8941 | 76 | 2.2344 | 0.2162 | 2.2344 | 1.4948 |
| No log | 0.9176 | 78 | 1.9152 | 0.3143 | 1.9152 | 1.3839 |
| No log | 0.9412 | 80 | 1.5257 | 0.4341 | 1.5257 | 1.2352 |
| No log | 0.9647 | 82 | 1.2594 | 0.4961 | 1.2594 | 1.1222 |
| No log | 0.9882 | 84 | 1.2755 | 0.5588 | 1.2755 | 1.1294 |
| No log | 1.0118 | 86 | 1.1837 | 0.5547 | 1.1837 | 1.0880 |
| No log | 1.0353 | 88 | 1.0343 | 0.6061 | 1.0343 | 1.0170 |
| No log | 1.0588 | 90 | 1.0378 | 0.6061 | 1.0378 | 1.0187 |
| No log | 1.0824 | 92 | 1.0150 | 0.5846 | 1.0150 | 1.0075 |
| No log | 1.1059 | 94 | 1.0470 | 0.5512 | 1.0470 | 1.0232 |
| No log | 1.1294 | 96 | 1.1442 | 0.5366 | 1.1442 | 1.0697 |
| No log | 1.1529 | 98 | 1.0102 | 0.5714 | 1.0102 | 1.0051 |
| No log | 1.1765 | 100 | 1.0180 | 0.6094 | 1.0180 | 1.0089 |
| No log | 1.2 | 102 | 0.9538 | 0.6202 | 0.9538 | 0.9766 |
| No log | 1.2235 | 104 | 0.9673 | 0.6357 | 0.9673 | 0.9835 |
| No log | 1.2471 | 106 | 0.8915 | 0.6718 | 0.8915 | 0.9442 |
| No log | 1.2706 | 108 | 0.8403 | 0.5469 | 0.8403 | 0.9167 |
| No log | 1.2941 | 110 | 0.8982 | 0.5926 | 0.8982 | 0.9477 |
| No log | 1.3176 | 112 | 0.8542 | 0.6277 | 0.8542 | 0.9242 |
| No log | 1.3412 | 114 | 0.8185 | 0.6567 | 0.8185 | 0.9047 |
| No log | 1.3647 | 116 | 0.8896 | 0.6718 | 0.8896 | 0.9432 |
| No log | 1.3882 | 118 | 0.8515 | 0.6667 | 0.8515 | 0.9228 |
| No log | 1.4118 | 120 | 0.8098 | 0.6377 | 0.8098 | 0.8999 |
| No log | 1.4353 | 122 | 1.0265 | 0.5986 | 1.0265 | 1.0132 |
| No log | 1.4588 | 124 | 1.1077 | 0.5772 | 1.1077 | 1.0525 |
| No log | 1.4824 | 126 | 0.9286 | 0.6087 | 0.9286 | 0.9636 |
| No log | 1.5059 | 128 | 0.9049 | 0.6519 | 0.9049 | 0.9513 |
| No log | 1.5294 | 130 | 1.1073 | 0.5692 | 1.1073 | 1.0523 |
| No log | 1.5529 | 132 | 1.3109 | 0.48 | 1.3109 | 1.1449 |
| No log | 1.5765 | 134 | 1.1888 | 0.5385 | 1.1888 | 1.0903 |
| No log | 1.6 | 136 | 1.1014 | 0.5507 | 1.1014 | 1.0495 |
| No log | 1.6235 | 138 | 1.2430 | 0.6144 | 1.2430 | 1.1149 |
| No log | 1.6471 | 140 | 1.1545 | 0.6144 | 1.1545 | 1.0745 |
| No log | 1.6706 | 142 | 0.8550 | 0.6803 | 0.8550 | 0.9247 |
| No log | 1.6941 | 144 | 0.7373 | 0.6993 | 0.7373 | 0.8587 |
| No log | 1.7176 | 146 | 0.6543 | 0.7183 | 0.6543 | 0.8089 |
| No log | 1.7412 | 148 | 0.6428 | 0.7682 | 0.6428 | 0.8017 |
| No log | 1.7647 | 150 | 0.6727 | 0.7733 | 0.6727 | 0.8202 |
| No log | 1.7882 | 152 | 0.8583 | 0.6345 | 0.8583 | 0.9265 |
| No log | 1.8118 | 154 | 0.8861 | 0.6389 | 0.8861 | 0.9413 |
| No log | 1.8353 | 156 | 0.7643 | 0.6269 | 0.7643 | 0.8743 |
| No log | 1.8588 | 158 | 0.7468 | 0.6912 | 0.7468 | 0.8642 |
| No log | 1.8824 | 160 | 0.7939 | 0.7111 | 0.7939 | 0.8910 |
| No log | 1.9059 | 162 | 0.7612 | 0.7059 | 0.7612 | 0.8724 |
| No log | 1.9294 | 164 | 0.7668 | 0.7059 | 0.7668 | 0.8757 |
| No log | 1.9529 | 166 | 0.7634 | 0.7059 | 0.7634 | 0.8737 |
| No log | 1.9765 | 168 | 0.7709 | 0.6963 | 0.7709 | 0.8780 |
| No log | 2.0 | 170 | 0.7588 | 0.7015 | 0.7588 | 0.8711 |
| No log | 2.0235 | 172 | 0.7491 | 0.6917 | 0.7491 | 0.8655 |
| No log | 2.0471 | 174 | 0.8015 | 0.6294 | 0.8015 | 0.8953 |
| No log | 2.0706 | 176 | 0.8458 | 0.6154 | 0.8458 | 0.9197 |
| No log | 2.0941 | 178 | 0.7547 | 0.7285 | 0.7547 | 0.8687 |
| No log | 2.1176 | 180 | 0.6971 | 0.7308 | 0.6971 | 0.8349 |
| No log | 2.1412 | 182 | 0.7088 | 0.7468 | 0.7088 | 0.8419 |
| No log | 2.1647 | 184 | 0.7490 | 0.7407 | 0.7490 | 0.8655 |
| No log | 2.1882 | 186 | 0.6306 | 0.7826 | 0.6306 | 0.7941 |
| No log | 2.2118 | 188 | 0.5544 | 0.7843 | 0.5544 | 0.7446 |
| No log | 2.2353 | 190 | 0.5466 | 0.8 | 0.5466 | 0.7393 |
| No log | 2.2588 | 192 | 0.5778 | 0.7943 | 0.5778 | 0.7602 |
| No log | 2.2824 | 194 | 0.6251 | 0.8 | 0.6251 | 0.7906 |
| No log | 2.3059 | 196 | 0.6102 | 0.7714 | 0.6102 | 0.7811 |
| No log | 2.3294 | 198 | 0.6840 | 0.7429 | 0.6840 | 0.8271 |
| No log | 2.3529 | 200 | 0.7767 | 0.6618 | 0.7767 | 0.8813 |
| No log | 2.3765 | 202 | 0.8260 | 0.6519 | 0.8260 | 0.9088 |
| No log | 2.4 | 204 | 0.8370 | 0.5846 | 0.8370 | 0.9149 |
| No log | 2.4235 | 206 | 0.9404 | 0.6260 | 0.9404 | 0.9698 |
| No log | 2.4471 | 208 | 1.0683 | 0.6074 | 1.0683 | 1.0336 |
| No log | 2.4706 | 210 | 1.0441 | 0.6331 | 1.0441 | 1.0218 |
| No log | 2.4941 | 212 | 0.7713 | 0.6620 | 0.7713 | 0.8783 |
| No log | 2.5176 | 214 | 0.6549 | 0.7338 | 0.6549 | 0.8092 |
| No log | 2.5412 | 216 | 0.7737 | 0.7246 | 0.7737 | 0.8796 |
| No log | 2.5647 | 218 | 0.7439 | 0.7445 | 0.7439 | 0.8625 |
| No log | 2.5882 | 220 | 0.6816 | 0.7714 | 0.6816 | 0.8256 |
| No log | 2.6118 | 222 | 0.6660 | 0.7606 | 0.6660 | 0.8161 |
| No log | 2.6353 | 224 | 0.6421 | 0.7606 | 0.6421 | 0.8013 |
| No log | 2.6588 | 226 | 0.6299 | 0.7571 | 0.6299 | 0.7937 |
| No log | 2.6824 | 228 | 0.6383 | 0.7737 | 0.6383 | 0.7989 |
| No log | 2.7059 | 230 | 0.6518 | 0.7626 | 0.6518 | 0.8073 |
| No log | 2.7294 | 232 | 0.5954 | 0.7919 | 0.5954 | 0.7716 |
| No log | 2.7529 | 234 | 0.6308 | 0.75 | 0.6308 | 0.7942 |
| No log | 2.7765 | 236 | 0.7196 | 0.7362 | 0.7196 | 0.8483 |
| No log | 2.8 | 238 | 0.6472 | 0.7738 | 0.6472 | 0.8045 |
| No log | 2.8235 | 240 | 0.5434 | 0.8075 | 0.5434 | 0.7372 |
| No log | 2.8471 | 242 | 0.5521 | 0.7534 | 0.5521 | 0.7431 |
| No log | 2.8706 | 244 | 0.5689 | 0.7586 | 0.5689 | 0.7542 |
| No log | 2.8941 | 246 | 0.6134 | 0.7448 | 0.6134 | 0.7832 |
| No log | 2.9176 | 248 | 0.7637 | 0.6939 | 0.7637 | 0.8739 |
| No log | 2.9412 | 250 | 0.8266 | 0.6621 | 0.8266 | 0.9092 |
| No log | 2.9647 | 252 | 0.7750 | 0.6853 | 0.7750 | 0.8803 |
| No log | 2.9882 | 254 | 0.7081 | 0.7194 | 0.7081 | 0.8415 |
| No log | 3.0118 | 256 | 0.6658 | 0.7429 | 0.6658 | 0.8159 |
| No log | 3.0353 | 258 | 0.6627 | 0.7632 | 0.6627 | 0.8140 |
| No log | 3.0588 | 260 | 0.7861 | 0.7027 | 0.7861 | 0.8866 |
| No log | 3.0824 | 262 | 0.8693 | 0.6667 | 0.8693 | 0.9324 |
| No log | 3.1059 | 264 | 0.8447 | 0.6962 | 0.8447 | 0.9191 |
| No log | 3.1294 | 266 | 0.7547 | 0.76 | 0.7547 | 0.8688 |
| No log | 3.1529 | 268 | 0.7511 | 0.7763 | 0.7511 | 0.8667 |
| No log | 3.1765 | 270 | 0.7411 | 0.76 | 0.7411 | 0.8609 |
| No log | 3.2 | 272 | 0.7470 | 0.7338 | 0.7470 | 0.8643 |
| No log | 3.2235 | 274 | 0.7854 | 0.7482 | 0.7854 | 0.8862 |
| No log | 3.2471 | 276 | 0.7636 | 0.7101 | 0.7636 | 0.8739 |
| No log | 3.2706 | 278 | 0.7759 | 0.7050 | 0.7759 | 0.8808 |
| No log | 3.2941 | 280 | 0.7727 | 0.7273 | 0.7727 | 0.8790 |
| No log | 3.3176 | 282 | 0.7086 | 0.7324 | 0.7086 | 0.8418 |
| No log | 3.3412 | 284 | 0.6374 | 0.7518 | 0.6374 | 0.7984 |
| No log | 3.3647 | 286 | 0.6178 | 0.7639 | 0.6178 | 0.7860 |
| No log | 3.3882 | 288 | 0.6270 | 0.7724 | 0.6270 | 0.7918 |
| No log | 3.4118 | 290 | 0.6655 | 0.7518 | 0.6655 | 0.8158 |
| No log | 3.4353 | 292 | 0.7305 | 0.7286 | 0.7305 | 0.8547 |
| No log | 3.4588 | 294 | 0.7508 | 0.7101 | 0.7508 | 0.8665 |
| No log | 3.4824 | 296 | 0.7965 | 0.6812 | 0.7965 | 0.8925 |
| No log | 3.5059 | 298 | 0.8511 | 0.6571 | 0.8511 | 0.9226 |
| No log | 3.5294 | 300 | 0.7663 | 0.7 | 0.7663 | 0.8754 |
| No log | 3.5529 | 302 | 0.6312 | 0.7571 | 0.6312 | 0.7945 |
| No log | 3.5765 | 304 | 0.6220 | 0.7536 | 0.6220 | 0.7887 |
| No log | 3.6 | 306 | 0.6584 | 0.7259 | 0.6584 | 0.8114 |
| No log | 3.6235 | 308 | 0.6772 | 0.7121 | 0.6772 | 0.8229 |
| No log | 3.6471 | 310 | 0.6926 | 0.7391 | 0.6926 | 0.8322 |
| No log | 3.6706 | 312 | 0.7617 | 0.7015 | 0.7617 | 0.8727 |
| No log | 3.6941 | 314 | 0.8401 | 0.6718 | 0.8401 | 0.9166 |
| No log | 3.7176 | 316 | 0.8706 | 0.6364 | 0.8706 | 0.9330 |
| No log | 3.7412 | 318 | 0.8456 | 0.6260 | 0.8456 | 0.9196 |
| No log | 3.7647 | 320 | 0.7858 | 0.7324 | 0.7858 | 0.8864 |
| No log | 3.7882 | 322 | 0.7151 | 0.7246 | 0.7151 | 0.8457 |
| No log | 3.8118 | 324 | 0.7409 | 0.7023 | 0.7409 | 0.8608 |
| No log | 3.8353 | 326 | 0.7238 | 0.6970 | 0.7238 | 0.8508 |
| No log | 3.8588 | 328 | 0.7246 | 0.6870 | 0.7246 | 0.8512 |
| No log | 3.8824 | 330 | 0.7731 | 0.6923 | 0.7731 | 0.8793 |
| No log | 3.9059 | 332 | 0.8096 | 0.6562 | 0.8096 | 0.8998 |
| No log | 3.9294 | 334 | 0.9239 | 0.5588 | 0.9239 | 0.9612 |
| No log | 3.9529 | 336 | 1.0388 | 0.5547 | 1.0388 | 1.0192 |
| No log | 3.9765 | 338 | 1.0096 | 0.5522 | 1.0096 | 1.0048 |
| No log | 4.0 | 340 | 0.8886 | 0.5760 | 0.8886 | 0.9427 |
| No log | 4.0235 | 342 | 0.8666 | 0.6299 | 0.8666 | 0.9309 |
| No log | 4.0471 | 344 | 0.8230 | 0.6406 | 0.8230 | 0.9072 |
| No log | 4.0706 | 346 | 0.7607 | 0.6667 | 0.7607 | 0.8722 |
| No log | 4.0941 | 348 | 0.7114 | 0.7445 | 0.7114 | 0.8434 |
| No log | 4.1176 | 350 | 0.6899 | 0.7591 | 0.6899 | 0.8306 |
| No log | 4.1412 | 352 | 0.6866 | 0.7153 | 0.6866 | 0.8286 |
| No log | 4.1647 | 354 | 0.7330 | 0.7042 | 0.7330 | 0.8562 |
| No log | 4.1882 | 356 | 0.7468 | 0.7007 | 0.7468 | 0.8642 |
| No log | 4.2118 | 358 | 0.7488 | 0.7164 | 0.7488 | 0.8653 |
| No log | 4.2353 | 360 | 0.7315 | 0.7218 | 0.7315 | 0.8553 |
| No log | 4.2588 | 362 | 0.6861 | 0.7153 | 0.6861 | 0.8283 |
| No log | 4.2824 | 364 | 0.6597 | 0.7429 | 0.6597 | 0.8122 |
| No log | 4.3059 | 366 | 0.7036 | 0.7260 | 0.7036 | 0.8388 |
| No log | 4.3294 | 368 | 0.7959 | 0.6525 | 0.7959 | 0.8921 |
| No log | 4.3529 | 370 | 0.7750 | 0.6857 | 0.7750 | 0.8803 |
| No log | 4.3765 | 372 | 0.7394 | 0.7391 | 0.7394 | 0.8599 |
| No log | 4.4 | 374 | 0.7253 | 0.7338 | 0.7253 | 0.8516 |
| No log | 4.4235 | 376 | 0.7183 | 0.7310 | 0.7183 | 0.8475 |
| No log | 4.4471 | 378 | 0.7231 | 0.7222 | 0.7231 | 0.8504 |
| No log | 4.4706 | 380 | 0.7008 | 0.7222 | 0.7008 | 0.8371 |
| No log | 4.4941 | 382 | 0.7029 | 0.7465 | 0.7029 | 0.8384 |
| No log | 4.5176 | 384 | 0.6484 | 0.7606 | 0.6484 | 0.8052 |
| No log | 4.5412 | 386 | 0.6424 | 0.7353 | 0.6424 | 0.8015 |
| No log | 4.5647 | 388 | 0.6551 | 0.7218 | 0.6551 | 0.8094 |
| No log | 4.5882 | 390 | 0.6690 | 0.7273 | 0.6690 | 0.8179 |
| No log | 4.6118 | 392 | 0.6714 | 0.7273 | 0.6714 | 0.8194 |
| No log | 4.6353 | 394 | 0.6710 | 0.7647 | 0.6710 | 0.8191 |
| No log | 4.6588 | 396 | 0.7000 | 0.7681 | 0.7000 | 0.8367 |
| No log | 4.6824 | 398 | 0.7323 | 0.7591 | 0.7323 | 0.8558 |
| No log | 4.7059 | 400 | 0.7124 | 0.7385 | 0.7124 | 0.8440 |
| No log | 4.7294 | 402 | 0.6877 | 0.7231 | 0.6877 | 0.8293 |
| No log | 4.7529 | 404 | 0.6501 | 0.75 | 0.6501 | 0.8063 |
| No log | 4.7765 | 406 | 0.6516 | 0.7626 | 0.6516 | 0.8072 |
| No log | 4.8 | 408 | 0.7483 | 0.7246 | 0.7483 | 0.8651 |
| No log | 4.8235 | 410 | 0.9247 | 0.5909 | 0.9247 | 0.9616 |
| No log | 4.8471 | 412 | 0.9672 | 0.5649 | 0.9672 | 0.9835 |
| No log | 4.8706 | 414 | 0.8486 | 0.5692 | 0.8486 | 0.9212 |
| No log | 4.8941 | 416 | 0.7573 | 0.7206 | 0.7573 | 0.8702 |
| No log | 4.9176 | 418 | 0.7473 | 0.7353 | 0.7473 | 0.8645 |
| No log | 4.9412 | 420 | 0.7321 | 0.7445 | 0.7321 | 0.8556 |
| No log | 4.9647 | 422 | 0.7282 | 0.7445 | 0.7282 | 0.8534 |
| No log | 4.9882 | 424 | 0.7301 | 0.7 | 0.7301 | 0.8545 |
| No log | 5.0118 | 426 | 0.7228 | 0.7083 | 0.7228 | 0.8502 |
| No log | 5.0353 | 428 | 0.6652 | 0.7571 | 0.6652 | 0.8156 |
| No log | 5.0588 | 430 | 0.6495 | 0.7445 | 0.6495 | 0.8059 |
| No log | 5.0824 | 432 | 0.6750 | 0.7445 | 0.6750 | 0.8216 |
| No log | 5.1059 | 434 | 0.7054 | 0.7353 | 0.7054 | 0.8399 |
| No log | 5.1294 | 436 | 0.7216 | 0.7482 | 0.7216 | 0.8494 |
| No log | 5.1529 | 438 | 0.7088 | 0.7391 | 0.7088 | 0.8419 |
| No log | 5.1765 | 440 | 0.7174 | 0.7536 | 0.7174 | 0.8470 |
| No log | 5.2 | 442 | 0.6744 | 0.7746 | 0.6744 | 0.8212 |
| No log | 5.2235 | 444 | 0.6361 | 0.7914 | 0.6361 | 0.7975 |
| No log | 5.2471 | 446 | 0.6384 | 0.7857 | 0.6384 | 0.7990 |
| No log | 5.2706 | 448 | 0.6634 | 0.75 | 0.6634 | 0.8145 |
| No log | 5.2941 | 450 | 0.7426 | 0.6711 | 0.7426 | 0.8618 |
| No log | 5.3176 | 452 | 0.8392 | 0.6395 | 0.8392 | 0.9161 |
| No log | 5.3412 | 454 | 0.7811 | 0.6531 | 0.7811 | 0.8838 |
| No log | 5.3647 | 456 | 0.7263 | 0.7368 | 0.7263 | 0.8522 |
| No log | 5.3882 | 458 | 0.6758 | 0.7347 | 0.6758 | 0.8221 |
| No log | 5.4118 | 460 | 0.6331 | 0.7887 | 0.6331 | 0.7956 |
| No log | 5.4353 | 462 | 0.6256 | 0.7794 | 0.6256 | 0.7910 |
| No log | 5.4588 | 464 | 0.6001 | 0.7883 | 0.6001 | 0.7747 |
| No log | 5.4824 | 466 | 0.5554 | 0.7887 | 0.5554 | 0.7453 |
| No log | 5.5059 | 468 | 0.5458 | 0.7862 | 0.5458 | 0.7388 |
| No log | 5.5294 | 470 | 0.5707 | 0.7943 | 0.5707 | 0.7555 |
| No log | 5.5529 | 472 | 0.6401 | 0.7801 | 0.6401 | 0.8001 |
| No log | 5.5765 | 474 | 0.7433 | 0.7164 | 0.7433 | 0.8622 |
| No log | 5.6 | 476 | 0.6860 | 0.7218 | 0.6860 | 0.8283 |
| No log | 5.6235 | 478 | 0.6419 | 0.75 | 0.6419 | 0.8012 |
| No log | 5.6471 | 480 | 0.6715 | 0.7338 | 0.6715 | 0.8194 |
| No log | 5.6706 | 482 | 0.7588 | 0.6119 | 0.7588 | 0.8711 |
| No log | 5.6941 | 484 | 0.7925 | 0.6119 | 0.7925 | 0.8902 |
| No log | 5.7176 | 486 | 0.7648 | 0.6515 | 0.7648 | 0.8745 |
| No log | 5.7412 | 488 | 0.7759 | 0.6614 | 0.7759 | 0.8809 |
| No log | 5.7647 | 490 | 0.8042 | 0.6349 | 0.8042 | 0.8968 |
| No log | 5.7882 | 492 | 0.7965 | 0.6614 | 0.7965 | 0.8925 |
| No log | 5.8118 | 494 | 0.8208 | 0.5873 | 0.8208 | 0.9060 |
| No log | 5.8353 | 496 | 1.0141 | 0.5714 | 1.0141 | 1.0070 |
| No log | 5.8588 | 498 | 1.1450 | 0.5755 | 1.1450 | 1.0700 |
| 0.4044 | 5.8824 | 500 | 1.1028 | 0.5693 | 1.1028 | 1.0501 |
| 0.4044 | 5.9059 | 502 | 0.9263 | 0.5692 | 0.9263 | 0.9624 |
| 0.4044 | 5.9294 | 504 | 0.7975 | 0.6667 | 0.7975 | 0.8930 |
| 0.4044 | 5.9529 | 506 | 0.7460 | 0.6970 | 0.7460 | 0.8637 |
| 0.4044 | 5.9765 | 508 | 0.7193 | 0.6970 | 0.7193 | 0.8481 |
| 0.4044 | 6.0 | 510 | 0.6773 | 0.6912 | 0.6773 | 0.8230 |
| 0.4044 | 6.0235 | 512 | 0.6524 | 0.7246 | 0.6524 | 0.8077 |
| 0.4044 | 6.0471 | 514 | 0.6800 | 0.7413 | 0.6800 | 0.8246 |
| 0.4044 | 6.0706 | 516 | 0.7152 | 0.7361 | 0.7152 | 0.8457 |
| 0.4044 | 6.0941 | 518 | 0.6992 | 0.7534 | 0.6992 | 0.8362 |
| 0.4044 | 6.1176 | 520 | 0.6368 | 0.8108 | 0.6368 | 0.7980 |
| 0.4044 | 6.1412 | 522 | 0.6208 | 0.7050 | 0.6208 | 0.7879 |
| 0.4044 | 6.1647 | 524 | 0.6287 | 0.7338 | 0.6287 | 0.7929 |
| 0.4044 | 6.1882 | 526 | 0.6505 | 0.7445 | 0.6505 | 0.8066 |
| 0.4044 | 6.2118 | 528 | 0.6889 | 0.7353 | 0.6889 | 0.8300 |
| 0.4044 | 6.2353 | 530 | 0.7252 | 0.7164 | 0.7252 | 0.8516 |
| 0.4044 | 6.2588 | 532 | 0.7400 | 0.7273 | 0.7400 | 0.8603 |
| 0.4044 | 6.2824 | 534 | 0.7593 | 0.7231 | 0.7593 | 0.8714 |
| 0.4044 | 6.3059 | 536 | 0.7806 | 0.6512 | 0.7806 | 0.8835 |
| 0.4044 | 6.3294 | 538 | 0.7719 | 0.6512 | 0.7719 | 0.8786 |
| 0.4044 | 6.3529 | 540 | 0.7145 | 0.7273 | 0.7145 | 0.8453 |
| 0.4044 | 6.3765 | 542 | 0.6716 | 0.7463 | 0.6716 | 0.8195 |
| 0.4044 | 6.4 | 544 | 0.6240 | 0.7660 | 0.6240 | 0.7899 |
| 0.4044 | 6.4235 | 546 | 0.6213 | 0.7733 | 0.6213 | 0.7882 |
| 0.4044 | 6.4471 | 548 | 0.6149 | 0.7682 | 0.6149 | 0.7842 |
| 0.4044 | 6.4706 | 550 | 0.5767 | 0.8026 | 0.5767 | 0.7594 |
| 0.4044 | 6.4941 | 552 | 0.5553 | 0.7867 | 0.5553 | 0.7452 |
| 0.4044 | 6.5176 | 554 | 0.5795 | 0.7536 | 0.5795 | 0.7612 |
| 0.4044 | 6.5412 | 556 | 0.6064 | 0.7591 | 0.6064 | 0.7787 |
| 0.4044 | 6.5647 | 558 | 0.6388 | 0.7647 | 0.6388 | 0.7993 |
| 0.4044 | 6.5882 | 560 | 0.6899 | 0.7391 | 0.6899 | 0.8306 |
| 0.4044 | 6.6118 | 562 | 0.7409 | 0.7015 | 0.7409 | 0.8608 |
| 0.4044 | 6.6353 | 564 | 0.7466 | 0.6870 | 0.7466 | 0.8641 |
| 0.4044 | 6.6588 | 566 | 0.7196 | 0.7068 | 0.7196 | 0.8483 |
| 0.4044 | 6.6824 | 568 | 0.6910 | 0.7164 | 0.6910 | 0.8313 |
| 0.4044 | 6.7059 | 570 | 0.7154 | 0.6718 | 0.7154 | 0.8458 |
| 0.4044 | 6.7294 | 572 | 0.7527 | 0.6357 | 0.7527 | 0.8676 |
| 0.4044 | 6.7529 | 574 | 0.7284 | 0.6870 | 0.7284 | 0.8535 |
| 0.4044 | 6.7765 | 576 | 0.7513 | 0.6870 | 0.7513 | 0.8668 |
| 0.4044 | 6.8 | 578 | 0.8152 | 0.6667 | 0.8152 | 0.9029 |
| 0.4044 | 6.8235 | 580 | 0.8381 | 0.6299 | 0.8381 | 0.9155 |
| 0.4044 | 6.8471 | 582 | 0.8717 | 0.6032 | 0.8717 | 0.9336 |
| 0.4044 | 6.8706 | 584 | 0.8429 | 0.5938 | 0.8429 | 0.9181 |
| 0.4044 | 6.8941 | 586 | 0.7695 | 0.6963 | 0.7695 | 0.8772 |
| 0.4044 | 6.9176 | 588 | 0.6751 | 0.7153 | 0.6751 | 0.8217 |
| 0.4044 | 6.9412 | 590 | 0.6328 | 0.7482 | 0.6328 | 0.7955 |
| 0.4044 | 6.9647 | 592 | 0.6285 | 0.7482 | 0.6285 | 0.7928 |
| 0.4044 | 6.9882 | 594 | 0.6450 | 0.7714 | 0.6450 | 0.8031 |
| 0.4044 | 7.0118 | 596 | 0.6854 | 0.7338 | 0.6854 | 0.8279 |
| 0.4044 | 7.0353 | 598 | 0.6775 | 0.7338 | 0.6775 | 0.8231 |
| 0.4044 | 7.0588 | 600 | 0.7018 | 0.7299 | 0.7018 | 0.8377 |
| 0.4044 | 7.0824 | 602 | 0.7298 | 0.7111 | 0.7298 | 0.8543 |
| 0.4044 | 7.1059 | 604 | 0.7536 | 0.7068 | 0.7536 | 0.8681 |
| 0.4044 | 7.1294 | 606 | 0.7881 | 0.7259 | 0.7881 | 0.8877 |
| 0.4044 | 7.1529 | 608 | 0.8275 | 0.6769 | 0.8275 | 0.9097 |
| 0.4044 | 7.1765 | 610 | 0.8161 | 0.6923 | 0.8161 | 0.9034 |
| 0.4044 | 7.2 | 612 | 0.7641 | 0.6917 | 0.7641 | 0.8741 |
| 0.4044 | 7.2235 | 614 | 0.7629 | 0.7111 | 0.7629 | 0.8734 |
| 0.4044 | 7.2471 | 616 | 0.7606 | 0.7059 | 0.7606 | 0.8721 |
| 0.4044 | 7.2706 | 618 | 0.8071 | 0.6667 | 0.8071 | 0.8984 |
| 0.4044 | 7.2941 | 620 | 0.8188 | 0.6667 | 0.8188 | 0.9049 |
| 0.4044 | 7.3176 | 622 | 0.7329 | 0.7206 | 0.7329 | 0.8561 |
| 0.4044 | 7.3412 | 624 | 0.6847 | 0.7445 | 0.6847 | 0.8275 |
| 0.4044 | 7.3647 | 626 | 0.6716 | 0.6963 | 0.6716 | 0.8195 |
| 0.4044 | 7.3882 | 628 | 0.6578 | 0.7299 | 0.6578 | 0.8110 |
| 0.4044 | 7.4118 | 630 | 0.6465 | 0.7801 | 0.6465 | 0.8040 |
| 0.4044 | 7.4353 | 632 | 0.6286 | 0.7801 | 0.6286 | 0.7929 |
| 0.4044 | 7.4588 | 634 | 0.6098 | 0.7801 | 0.6098 | 0.7809 |
| 0.4044 | 7.4824 | 636 | 0.6016 | 0.7746 | 0.6016 | 0.7756 |
| 0.4044 | 7.5059 | 638 | 0.5894 | 0.7626 | 0.5894 | 0.7677 |
| 0.4044 | 7.5294 | 640 | 0.6149 | 0.75 | 0.6149 | 0.7841 |
| 0.4044 | 7.5529 | 642 | 0.7125 | 0.6617 | 0.7125 | 0.8441 |
| 0.4044 | 7.5765 | 644 | 0.7692 | 0.6615 | 0.7692 | 0.8770 |
| 0.4044 | 7.6 | 646 | 0.7367 | 0.6565 | 0.7367 | 0.8583 |
| 0.4044 | 7.6235 | 648 | 0.6699 | 0.7407 | 0.6699 | 0.8185 |
| 0.4044 | 7.6471 | 650 | 0.6313 | 0.7391 | 0.6313 | 0.7945 |
| 0.4044 | 7.6706 | 652 | 0.6391 | 0.7651 | 0.6391 | 0.7994 |
| 0.4044 | 7.6941 | 654 | 0.6797 | 0.7397 | 0.6797 | 0.8245 |
| 0.4044 | 7.7176 | 656 | 0.7192 | 0.6993 | 0.7192 | 0.8480 |
| 0.4044 | 7.7412 | 658 | 0.7284 | 0.6993 | 0.7284 | 0.8535 |
| 0.4044 | 7.7647 | 660 | 0.7342 | 0.6765 | 0.7342 | 0.8569 |
| 0.4044 | 7.7882 | 662 | 0.7437 | 0.6815 | 0.7437 | 0.8624 |
| 0.4044 | 7.8118 | 664 | 0.8103 | 0.6418 | 0.8103 | 0.9002 |
| 0.4044 | 7.8353 | 666 | 0.9217 | 0.5455 | 0.9217 | 0.9601 |
| 0.4044 | 7.8588 | 668 | 0.9676 | 0.5672 | 0.9676 | 0.9837 |
| 0.4044 | 7.8824 | 670 | 0.8739 | 0.6074 | 0.8739 | 0.9348 |
| 0.4044 | 7.9059 | 672 | 0.7278 | 0.6912 | 0.7278 | 0.8531 |
| 0.4044 | 7.9294 | 674 | 0.6702 | 0.6818 | 0.6702 | 0.8187 |
| 0.4044 | 7.9529 | 676 | 0.7664 | 0.6769 | 0.7664 | 0.8755 |
| 0.4044 | 7.9765 | 678 | 0.8253 | 0.6462 | 0.8253 | 0.9084 |
| 0.4044 | 8.0 | 680 | 0.7950 | 0.6462 | 0.7950 | 0.8916 |
| 0.4044 | 8.0235 | 682 | 0.7046 | 0.7164 | 0.7046 | 0.8394 |
| 0.4044 | 8.0471 | 684 | 0.6580 | 0.7015 | 0.6580 | 0.8112 |
| 0.4044 | 8.0706 | 686 | 0.6576 | 0.7164 | 0.6576 | 0.8109 |
| 0.4044 | 8.0941 | 688 | 0.6681 | 0.7164 | 0.6681 | 0.8174 |
| 0.4044 | 8.1176 | 690 | 0.6734 | 0.7015 | 0.6734 | 0.8206 |
| 0.4044 | 8.1412 | 692 | 0.6882 | 0.6818 | 0.6882 | 0.8296 |
| 0.4044 | 8.1647 | 694 | 0.6963 | 0.7164 | 0.6963 | 0.8345 |
| 0.4044 | 8.1882 | 696 | 0.7216 | 0.7068 | 0.7216 | 0.8495 |
| 0.4044 | 8.2118 | 698 | 0.7675 | 0.6719 | 0.7675 | 0.8760 |
| 0.4044 | 8.2353 | 700 | 0.8272 | 0.6457 | 0.8272 | 0.9095 |
| 0.4044 | 8.2588 | 702 | 0.7659 | 0.6406 | 0.7659 | 0.8752 |
| 0.4044 | 8.2824 | 704 | 0.6860 | 0.7273 | 0.6860 | 0.8283 |
| 0.4044 | 8.3059 | 706 | 0.6532 | 0.6870 | 0.6532 | 0.8082 |
| 0.4044 | 8.3294 | 708 | 0.6825 | 0.6923 | 0.6825 | 0.8262 |
| 0.4044 | 8.3529 | 710 | 0.6742 | 0.6923 | 0.6742 | 0.8211 |
| 0.4044 | 8.3765 | 712 | 0.6693 | 0.6923 | 0.6693 | 0.8181 |
| 0.4044 | 8.4 | 714 | 0.6927 | 0.7218 | 0.6927 | 0.8323 |
| 0.4044 | 8.4235 | 716 | 0.7735 | 0.7015 | 0.7735 | 0.8795 |
| 0.4044 | 8.4471 | 718 | 0.8740 | 0.6308 | 0.8740 | 0.9349 |
| 0.4044 | 8.4706 | 720 | 0.9761 | 0.5512 | 0.9761 | 0.9880 |
| 0.4044 | 8.4941 | 722 | 0.9538 | 0.5512 | 0.9538 | 0.9766 |
| 0.4044 | 8.5176 | 724 | 0.8742 | 0.6357 | 0.8742 | 0.9350 |
| 0.4044 | 8.5412 | 726 | 0.7830 | 0.6718 | 0.7830 | 0.8848 |
| 0.4044 | 8.5647 | 728 | 0.7013 | 0.7218 | 0.7013 | 0.8374 |
| 0.4044 | 8.5882 | 730 | 0.6450 | 0.7626 | 0.6450 | 0.8031 |
| 0.4044 | 8.6118 | 732 | 0.5906 | 0.7832 | 0.5906 | 0.7685 |
| 0.4044 | 8.6353 | 734 | 0.5593 | 0.8 | 0.5593 | 0.7479 |
| 0.4044 | 8.6588 | 736 | 0.5481 | 0.7973 | 0.5481 | 0.7403 |
| 0.4044 | 8.6824 | 738 | 0.5465 | 0.8 | 0.5465 | 0.7393 |
| 0.4044 | 8.7059 | 740 | 0.5807 | 0.7887 | 0.5807 | 0.7621 |
| 0.4044 | 8.7294 | 742 | 0.6206 | 0.7591 | 0.6206 | 0.7878 |
| 0.4044 | 8.7529 | 744 | 0.6595 | 0.7647 | 0.6595 | 0.8121 |
| 0.4044 | 8.7765 | 746 | 0.6865 | 0.7313 | 0.6865 | 0.8286 |
| 0.4044 | 8.8 | 748 | 0.7435 | 0.7121 | 0.7435 | 0.8623 |
| 0.4044 | 8.8235 | 750 | 0.7834 | 0.6923 | 0.7834 | 0.8851 |
| 0.4044 | 8.8471 | 752 | 0.7963 | 0.7023 | 0.7963 | 0.8923 |
| 0.4044 | 8.8706 | 754 | 0.7706 | 0.6718 | 0.7706 | 0.8778 |
| 0.4044 | 8.8941 | 756 | 0.7179 | 0.7407 | 0.7179 | 0.8473 |
| 0.4044 | 8.9176 | 758 | 0.6657 | 0.7482 | 0.6657 | 0.8159 |
| 0.4044 | 8.9412 | 760 | 0.6467 | 0.7552 | 0.6467 | 0.8042 |
| 0.4044 | 8.9647 | 762 | 0.6290 | 0.7534 | 0.6290 | 0.7931 |
| 0.4044 | 8.9882 | 764 | 0.6032 | 0.7660 | 0.6032 | 0.7767 |
| 0.4044 | 9.0118 | 766 | 0.6250 | 0.7153 | 0.6250 | 0.7906 |
| 0.4044 | 9.0353 | 768 | 0.6696 | 0.7218 | 0.6696 | 0.8183 |
| 0.4044 | 9.0588 | 770 | 0.7180 | 0.7023 | 0.7180 | 0.8474 |
| 0.4044 | 9.0824 | 772 | 0.7541 | 0.6923 | 0.7541 | 0.8684 |
| 0.4044 | 9.1059 | 774 | 0.7661 | 0.6718 | 0.7661 | 0.8752 |
| 0.4044 | 9.1294 | 776 | 0.7655 | 0.6512 | 0.7655 | 0.8749 |
| 0.4044 | 9.1529 | 778 | 0.7226 | 0.6769 | 0.7226 | 0.8500 |
| 0.4044 | 9.1765 | 780 | 0.6673 | 0.7218 | 0.6673 | 0.8169 |
| 0.4044 | 9.2 | 782 | 0.6229 | 0.7313 | 0.6229 | 0.7892 |
| 0.4044 | 9.2235 | 784 | 0.5901 | 0.7883 | 0.5901 | 0.7682 |
| 0.4044 | 9.2471 | 786 | 0.5837 | 0.7556 | 0.5837 | 0.7640 |
| 0.4044 | 9.2706 | 788 | 0.6284 | 0.7068 | 0.6284 | 0.7927 |
| 0.4044 | 9.2941 | 790 | 0.6726 | 0.7015 | 0.6726 | 0.8201 |
| 0.4044 | 9.3176 | 792 | 0.6722 | 0.7015 | 0.6722 | 0.8199 |
| 0.4044 | 9.3412 | 794 | 0.6621 | 0.7556 | 0.6621 | 0.8137 |
| 0.4044 | 9.3647 | 796 | 0.6420 | 0.7883 | 0.6420 | 0.8013 |
| 0.4044 | 9.3882 | 798 | 0.6360 | 0.7746 | 0.6360 | 0.7975 |
| 0.4044 | 9.4118 | 800 | 0.6382 | 0.7746 | 0.6382 | 0.7989 |
| 0.4044 | 9.4353 | 802 | 0.6527 | 0.7714 | 0.6527 | 0.8079 |
| 0.4044 | 9.4588 | 804 | 0.6712 | 0.7591 | 0.6712 | 0.8193 |
| 0.4044 | 9.4824 | 806 | 0.6982 | 0.75 | 0.6982 | 0.8356 |
| 0.4044 | 9.5059 | 808 | 0.7054 | 0.7556 | 0.7054 | 0.8399 |
| 0.4044 | 9.5294 | 810 | 0.6889 | 0.7556 | 0.6889 | 0.8300 |
| 0.4044 | 9.5529 | 812 | 0.6498 | 0.7647 | 0.6498 | 0.8061 |
| 0.4044 | 9.5765 | 814 | 0.6324 | 0.7647 | 0.6324 | 0.7953 |
| 0.4044 | 9.6 | 816 | 0.6245 | 0.7407 | 0.6245 | 0.7902 |
| 0.4044 | 9.6235 | 818 | 0.6203 | 0.7164 | 0.6203 | 0.7876 |
| 0.4044 | 9.6471 | 820 | 0.6110 | 0.7826 | 0.6110 | 0.7817 |
| 0.4044 | 9.6706 | 822 | 0.6462 | 0.7482 | 0.6462 | 0.8039 |
| 0.4044 | 9.6941 | 824 | 0.7155 | 0.7299 | 0.7155 | 0.8459 |
| 0.4044 | 9.7176 | 826 | 0.7572 | 0.7218 | 0.7572 | 0.8702 |
| 0.4044 | 9.7412 | 828 | 0.7900 | 0.6462 | 0.7900 | 0.8888 |
| 0.4044 | 9.7647 | 830 | 0.8023 | 0.6512 | 0.8023 | 0.8957 |
| 0.4044 | 9.7882 | 832 | 0.7953 | 0.6512 | 0.7953 | 0.8918 |
| 0.4044 | 9.8118 | 834 | 0.7832 | 0.6769 | 0.7832 | 0.8850 |
| 0.4044 | 9.8353 | 836 | 0.7789 | 0.6769 | 0.7789 | 0.8826 |
| 0.4044 | 9.8588 | 838 | 0.7643 | 0.6718 | 0.7643 | 0.8742 |
| 0.4044 | 9.8824 | 840 | 0.7424 | 0.6870 | 0.7424 | 0.8616 |
| 0.4044 | 9.9059 | 842 | 0.7217 | 0.6870 | 0.7217 | 0.8495 |
| 0.4044 | 9.9294 | 844 | 0.7102 | 0.6870 | 0.7102 | 0.8427 |
| 0.4044 | 9.9529 | 846 | 0.6758 | 0.7164 | 0.6758 | 0.8221 |
| 0.4044 | 9.9765 | 848 | 0.6978 | 0.7259 | 0.6978 | 0.8354 |
| 0.4044 | 10.0 | 850 | 0.7008 | 0.7259 | 0.7008 | 0.8371 |
| 0.4044 | 10.0235 | 852 | 0.6894 | 0.75 | 0.6894 | 0.8303 |
| 0.4044 | 10.0471 | 854 | 0.6992 | 0.7313 | 0.6992 | 0.8362 |
| 0.4044 | 10.0706 | 856 | 0.7457 | 0.7068 | 0.7457 | 0.8636 |
| 0.4044 | 10.0941 | 858 | 0.7664 | 0.7313 | 0.7664 | 0.8754 |
| 0.4044 | 10.1176 | 860 | 0.7684 | 0.7121 | 0.7684 | 0.8766 |
| 0.4044 | 10.1412 | 862 | 0.7533 | 0.6923 | 0.7533 | 0.8679 |
| 0.4044 | 10.1647 | 864 | 0.7391 | 0.6923 | 0.7391 | 0.8597 |
| 0.4044 | 10.1882 | 866 | 0.7283 | 0.7077 | 0.7283 | 0.8534 |
| 0.4044 | 10.2118 | 868 | 0.7062 | 0.7328 | 0.7062 | 0.8403 |
| 0.4044 | 10.2353 | 870 | 0.6896 | 0.7273 | 0.6896 | 0.8304 |
| 0.4044 | 10.2588 | 872 | 0.6808 | 0.7218 | 0.6808 | 0.8251 |
| 0.4044 | 10.2824 | 874 | 0.6565 | 0.7218 | 0.6565 | 0.8103 |
| 0.4044 | 10.3059 | 876 | 0.6455 | 0.7218 | 0.6455 | 0.8034 |
| 0.4044 | 10.3294 | 878 | 0.6543 | 0.7218 | 0.6543 | 0.8089 |
| 0.4044 | 10.3529 | 880 | 0.6832 | 0.7218 | 0.6832 | 0.8266 |
| 0.4044 | 10.3765 | 882 | 0.6777 | 0.7164 | 0.6777 | 0.8232 |
| 0.4044 | 10.4 | 884 | 0.6626 | 0.7407 | 0.6626 | 0.8140 |
| 0.4044 | 10.4235 | 886 | 0.6505 | 0.7883 | 0.6505 | 0.8065 |
| 0.4044 | 10.4471 | 888 | 0.6580 | 0.7463 | 0.6580 | 0.8112 |
| 0.4044 | 10.4706 | 890 | 0.6475 | 0.7883 | 0.6475 | 0.8047 |
| 0.4044 | 10.4941 | 892 | 0.6370 | 0.7883 | 0.6370 | 0.7981 |
| 0.4044 | 10.5176 | 894 | 0.6223 | 0.7794 | 0.6223 | 0.7888 |
| 0.4044 | 10.5412 | 896 | 0.6000 | 0.7794 | 0.6000 | 0.7746 |
| 0.4044 | 10.5647 | 898 | 0.5912 | 0.7883 | 0.5912 | 0.7689 |
| 0.4044 | 10.5882 | 900 | 0.6463 | 0.7218 | 0.6463 | 0.8039 |
| 0.4044 | 10.6118 | 902 | 0.6738 | 0.7338 | 0.6738 | 0.8209 |
| 0.4044 | 10.6353 | 904 | 0.7280 | 0.7234 | 0.7280 | 0.8532 |
| 0.4044 | 10.6588 | 906 | 0.7610 | 0.7234 | 0.7610 | 0.8724 |
| 0.4044 | 10.6824 | 908 | 0.7296 | 0.7234 | 0.7296 | 0.8542 |
| 0.4044 | 10.7059 | 910 | 0.7030 | 0.7164 | 0.7030 | 0.8385 |
| 0.4044 | 10.7294 | 912 | 0.7065 | 0.6769 | 0.7065 | 0.8406 |
| 0.4044 | 10.7529 | 914 | 0.7293 | 0.7077 | 0.7293 | 0.8540 |
| 0.4044 | 10.7765 | 916 | 0.7145 | 0.7023 | 0.7145 | 0.8453 |
| 0.4044 | 10.8 | 918 | 0.6666 | 0.7218 | 0.6666 | 0.8165 |
| 0.4044 | 10.8235 | 920 | 0.6149 | 0.7556 | 0.6149 | 0.7841 |
| 0.4044 | 10.8471 | 922 | 0.5859 | 0.8 | 0.5859 | 0.7654 |
| 0.4044 | 10.8706 | 924 | 0.5818 | 0.8085 | 0.5818 | 0.7627 |
| 0.4044 | 10.8941 | 926 | 0.6140 | 0.8028 | 0.6140 | 0.7836 |
| 0.4044 | 10.9176 | 928 | 0.6353 | 0.7518 | 0.6353 | 0.7971 |
| 0.4044 | 10.9412 | 930 | 0.6433 | 0.7857 | 0.6433 | 0.8021 |
| 0.4044 | 10.9647 | 932 | 0.6653 | 0.7681 | 0.6653 | 0.8156 |
| 0.4044 | 10.9882 | 934 | 0.6968 | 0.7591 | 0.6968 | 0.8348 |
| 0.4044 | 11.0118 | 936 | 0.7139 | 0.7482 | 0.7139 | 0.8449 |
| 0.4044 | 11.0353 | 938 | 0.7132 | 0.7407 | 0.7132 | 0.8445 |
| 0.4044 | 11.0588 | 940 | 0.7088 | 0.7463 | 0.7088 | 0.8419 |
| 0.4044 | 11.0824 | 942 | 0.7357 | 0.7313 | 0.7357 | 0.8578 |
| 0.4044 | 11.1059 | 944 | 0.7521 | 0.7218 | 0.7521 | 0.8672 |
| 0.4044 | 11.1294 | 946 | 0.7406 | 0.6870 | 0.7406 | 0.8606 |
| 0.4044 | 11.1529 | 948 | 0.7224 | 0.7023 | 0.7224 | 0.8499 |
| 0.4044 | 11.1765 | 950 | 0.7302 | 0.7023 | 0.7302 | 0.8545 |
| 0.4044 | 11.2 | 952 | 0.7442 | 0.6923 | 0.7442 | 0.8627 |
| 0.4044 | 11.2235 | 954 | 0.7381 | 0.7121 | 0.7381 | 0.8591 |
| 0.4044 | 11.2471 | 956 | 0.7113 | 0.7206 | 0.7113 | 0.8434 |
| 0.4044 | 11.2706 | 958 | 0.6854 | 0.7153 | 0.6854 | 0.8279 |
| 0.4044 | 11.2941 | 960 | 0.6590 | 0.7482 | 0.6590 | 0.8118 |
| 0.4044 | 11.3176 | 962 | 0.6259 | 0.7626 | 0.6259 | 0.7912 |
| 0.4044 | 11.3412 | 964 | 0.6329 | 0.7887 | 0.6329 | 0.7955 |
| 0.4044 | 11.3647 | 966 | 0.6677 | 0.7660 | 0.6677 | 0.8171 |
| 0.4044 | 11.3882 | 968 | 0.7156 | 0.7338 | 0.7156 | 0.8459 |
| 0.4044 | 11.4118 | 970 | 0.7384 | 0.7660 | 0.7384 | 0.8593 |
| 0.4044 | 11.4353 | 972 | 0.7493 | 0.7092 | 0.7493 | 0.8656 |
| 0.4044 | 11.4588 | 974 | 0.7879 | 0.6806 | 0.7879 | 0.8877 |
| 0.4044 | 11.4824 | 976 | 0.8131 | 0.6897 | 0.8131 | 0.9017 |
| 0.4044 | 11.5059 | 978 | 0.7900 | 0.6763 | 0.7900 | 0.8888 |
| 0.4044 | 11.5294 | 980 | 0.7492 | 0.6866 | 0.7492 | 0.8655 |
| 0.4044 | 11.5529 | 982 | 0.7286 | 0.7218 | 0.7286 | 0.8536 |
| 0.4044 | 11.5765 | 984 | 0.7230 | 0.7111 | 0.7230 | 0.8503 |
| 0.4044 | 11.6 | 986 | 0.7179 | 0.7111 | 0.7179 | 0.8473 |
| 0.4044 | 11.6235 | 988 | 0.7259 | 0.7111 | 0.7259 | 0.8520 |
| 0.4044 | 11.6471 | 990 | 0.7186 | 0.7164 | 0.7186 | 0.8477 |
| 0.4044 | 11.6706 | 992 | 0.7062 | 0.6970 | 0.7062 | 0.8403 |
| 0.4044 | 11.6941 | 994 | 0.6983 | 0.7407 | 0.6983 | 0.8357 |
| 0.4044 | 11.7176 | 996 | 0.7026 | 0.7647 | 0.7026 | 0.8382 |
| 0.4044 | 11.7412 | 998 | 0.6680 | 0.7591 | 0.6680 | 0.8173 |
| 0.0787 | 11.7647 | 1000 | 0.6524 | 0.7536 | 0.6524 | 0.8077 |
| 0.0787 | 11.7882 | 1002 | 0.6338 | 0.7536 | 0.6338 | 0.7961 |
| 0.0787 | 11.8118 | 1004 | 0.6187 | 0.7704 | 0.6187 | 0.7866 |
| 0.0787 | 11.8353 | 1006 | 0.6318 | 0.7647 | 0.6318 | 0.7949 |
| 0.0787 | 11.8588 | 1008 | 0.6490 | 0.7407 | 0.6490 | 0.8056 |
| 0.0787 | 11.8824 | 1010 | 0.6641 | 0.7313 | 0.6641 | 0.8149 |
| 0.0787 | 11.9059 | 1012 | 0.6649 | 0.7647 | 0.6649 | 0.8154 |
| 0.0787 | 11.9294 | 1014 | 0.6744 | 0.7463 | 0.6744 | 0.8212 |
| 0.0787 | 11.9529 | 1016 | 0.6947 | 0.7463 | 0.6947 | 0.8335 |
| 0.0787 | 11.9765 | 1018 | 0.7282 | 0.7407 | 0.7282 | 0.8533 |
| 0.0787 | 12.0 | 1020 | 0.7417 | 0.7218 | 0.7417 | 0.8612 |
| 0.0787 | 12.0235 | 1022 | 0.7471 | 0.7023 | 0.7471 | 0.8643 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
fadelfitrah/python-codegen | fadelfitrah | "2024-10-25T15:30:48Z" | 128 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-25T15:30:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_unlearned_GermanCredit_cfda_1ep_22 | MinaMila | "2025-03-31T21:54:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-31T21:51:06Z" | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Watc-h-Sophie-Rain-SpiderMan-video-updated/Sophie.Rain.SpiderMan.Video.clip | Watc-h-Sophie-Rain-SpiderMan-video-updated | "2025-03-22T15:53:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-22T15:53:47Z" |
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">βΊβΊβ
πΎπππΎπ ππππ ==βΊβΊ ππͺπ‘π‘ πππππ€οΈβ</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">π΄βΊπππππ ππππ π==βΊβΊ ππ¨π°π§π₯π¨ππ ππ¨π°β¬οΈβ¬οΈβ</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
03 seconds ago
Lπaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Lπaked on X Twitter Telegram
Lπaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Lπaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Lπaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Lπaked on X Twitter
. . . . . . . . . Lπaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Lπaked on X Twitter Telegram
Lπaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Lπaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter |
davisrbr/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16-r16 | davisrbr | "2024-08-20T01:47:19Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:red_pajama-data-1_t-sample",
"base_model:ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16",
"base_model:adapter:ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16",
"region:us"
] | null | "2024-08-18T11:21:31Z" | ---
base_model: ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16
datasets:
- red_pajama-data-1_t-sample
library_name: peft
tags:
- generated_from_trainer
model-index:
- name: Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16-r16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16-r16
This model is a fine-tuned version of [ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16](https://huggingface.co/ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16) on the red_pajama-data-1_t-sample dataset.
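
Since this repo stores a PEFT (LoRA) adapter on top of the AQLM-quantized base, loading it could look like the sketch below (untested; it assumes the `aqlm` package is installed and uses only the repo ids named in this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 2-bit AQLM base model (requires the `aqlm` package).
base = AutoModelForCausalLM.from_pretrained(
    "ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(
    "ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16"
)

# Attach the fine-tuned adapter from this repo.
model = PeftModel.from_pretrained(
    base, "davisrbr/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16-r16"
)
```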
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 50000
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.1
- Pytorch 2.3.1
- Datasets 2.19.0
- Tokenizers 0.19.1 |
NouRed/fine-tuned-vit-cifar10 | NouRed | "2023-11-06T15:36:42Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"dataset:cifar10",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-11-06T15:34:26Z" | ---
license: apache-2.0
datasets:
- cifar10
metrics:
- accuracy
- f1
- precision
- recall
library_name: transformers
pipeline_tag: image-classification
--- |
aleegis12/7d6ca3ac-824d-4359-8df9-28ab664fe5c6 | aleegis12 | "2025-02-05T09:01:37Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69",
"base_model:adapter:rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69",
"region:us"
] | null | "2025-02-05T07:26:02Z" | ---
library_name: peft
base_model: rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7d6ca3ac-824d-4359-8df9-28ab664fe5c6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- c7ee023794d7d85d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c7ee023794d7d85d_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/7d6ca3ac-824d-4359-8df9-28ab664fe5c6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/c7ee023794d7d85d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eadd58e0-1e15-4859-bf02-0db212f00a46
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: eadd58e0-1e15-4859-bf02-0db212f00a46
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7d6ca3ac-824d-4359-8df9-28ab664fe5c6
This model is a fine-tuned version of [rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69](https://huggingface.co/rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69) on the dataset specified in the config above.
It achieves the following results on the evaluation set:
- Loss: 0.0288
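
Per the axolotl config above, training prompts concatenated the `question` field (instruction) with the `context` field (input) as `'{instruction} {input}'`. A small, hypothetical helper mirroring that formatting (the function name and example strings are mine, not from the repo):

```python
from typing import Optional

def build_prompt(question: str, context: Optional[str] = None) -> str:
    # Mirrors the config's format '{instruction} {input}' and
    # no_input_format '{instruction}'.
    return f"{question} {context}" if context else question

# Example: a yes/no biomedical question with supporting context.
print(build_prompt("Does aspirin reduce fever?", "Aspirin is an antipyretic ..."))
```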
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4223 | 0.0002 | 1 | 0.0301 |
| 0.0016 | 0.0080 | 50 | 0.0524 |
| 0.0007 | 0.0160 | 100 | 0.0348 |
| 0.023 | 0.0239 | 150 | 0.0287 |
| 0.0083 | 0.0319 | 200 | 0.0288 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mannybernabe/train_mpt_7b | mannybernabe | "2024-01-31T09:53:01Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-01-31T09:52:51Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: train_mpt_7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_mpt_7b
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
LarryAIDraw/grapefruitHentaiModel_grapefruitv23 | LarryAIDraw | "2023-01-15T22:19:33Z" | 0 | 8 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-01-15T20:25:26Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/2583/grapefruit-hentai-model |
RichardErkhov/tvergho_-_txt2midi_musician-8bits | RichardErkhov | "2025-03-24T03:15:29Z" | 0 | 0 | null | [
"safetensors",
"llama",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-24T03:10:02Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
txt2midi_musician - bnb 8bits
- Model creator: https://huggingface.co/tvergho/
- Original model: https://huggingface.co/tvergho/txt2midi_musician/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KasparZ/Mistral-7B-v0.1-hitl | KasparZ | "2024-03-30T15:32:57Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-26T22:19:12Z" | base_model = "mistralai/Mistral-7B-v0.1"
dataset_name = "KasparZ/HITL-2"
load_in_4bit= True,
bnb_4bit_quant_type= "nf4",
bnb_4bit_compute_dtype= torch.bfloat16,
bnb_4bit_use_double_quant= False,
device_map="auto",
trust_remote_code=True,
lora_alpha=16,
lora_dropout=0.1,
r=64,
bias="none",
task_type="CAUSAL_LM",
target_modules=["q_proj", "k_proj", "v_proj", "o_proj","gate_proj"]
model.gradient_checkpointing_enable()
tokenizer.padding_side = 'right'
tokenizer.add_eos_token = True
num_train_epochs=4,
per_device_train_batch_size=4,
gradient_accumulation_steps=1,
optim="paged_adamw_32bit",
save_steps=50,
logging_steps=1,
learning_rate=2e-4,
weight_decay=0.001,
fp16=False,
bf16=False,
max_grad_norm=0.3,
max_steps=-1,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="constant", |
rizla/rizla54 | rizla | "2024-02-02T02:19:21Z" | 47 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"license:cc-by-nc-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-02T01:55:09Z" | ---
base_model: []
tags:
- mergekit
- merge
license: cc-by-nc-2.0
---
# This is an experimental model that I made by merging two Llama 2 70B models and gluing them together with mergekit. Mergekit is a tool that lets me mix and match different models into one big model, keeping all the smarts and skills of the original models. Llama 2 70B is a huge language model that can generate text for all kinds of topics and styles.
The merged model has 54 billion parameters and was trained on a cluster with 640GB of VRAM |
Nara-Lab/nallm-polyglot-ko-1.3b-base | Nara-Lab | "2023-06-28T09:24:15Z" | 2,272 | 2 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-22T01:12:03Z" | ---
license: mit
language:
- ko
---
NA-LLM is a Korean Large Language Model (LLM) developed by Nara Information.
https://github.com/Nara-Information/NA-LLM |
RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-awq | RichardErkhov | "2024-11-20T17:22:20Z" | 5 | 0 | null | [
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"awq",
"region:us"
] | null | "2024-11-20T17:21:17Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-1B-Instruct-gsm8k - AWQ
- Model creator: https://huggingface.co/ank028/
- Original model: https://huggingface.co/ank028/Llama-3.2-1B-Instruct-gsm8k/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abenius/7668d6e9-118c-4fe7-845f-f98473632ba4 | abenius | "2025-02-08T21:18:41Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-08T20:04:46Z" | ---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7668d6e9-118c-4fe7-845f-f98473632ba4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 300eccd730f6015e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/300eccd730f6015e_train_data.json
type:
field_input: input_text
field_instruction: task
field_output: output_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: abenius/7668d6e9-118c-4fe7-845f-f98473632ba4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.2
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 600
micro_batch_size: 2
mlflow_experiment_name: /tmp/300eccd730f6015e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 346403d6-43c0-4fe2-88c5-b550e62bfff1
wandb_project: Gradients-On-12
wandb_run: your_name
wandb_runid: 346403d6-43c0-4fe2-88c5-b550e62bfff1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 7668d6e9-118c-4fe7-845f-f98473632ba4
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the dataset specified in the config above.
It achieves the following results on the evaluation set:
- Loss: 0.6617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3135 | 0.1013 | 600 | 0.6617 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Xu-Ouyang/pythia-160m-deduped-int2-step4-GPTQ-wikitext2-uva | Xu-Ouyang | "2024-09-17T08:34:03Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | "2024-09-17T08:33:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
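
Since the template leaves this blank, here is a minimal, untested sketch based on the repo's `gptq` tags (it assumes `optimum` and `auto-gptq` are installed; nothing here comes from the model authors):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Xu-Ouyang/pythia-160m-deduped-int2-step4-GPTQ-wikitext2-uva"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The GPTQ quantization config saved in the repo is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```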
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/Phi-3.5-MoE-instruct-bf16 | mlx-community | "2024-08-24T11:18:46Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"phimoe",
"text-generation",
"nlp",
"code",
"mlx",
"conversational",
"custom_code",
"multilingual",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-08-24T08:57:06Z" | ---
language:
- multilingual
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-MoE-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- mlx
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# mlx-community/Phi-3.5-MoE-instruct-bf16
The Model [mlx-community/Phi-3.5-MoE-instruct-bf16](https://huggingface.co/mlx-community/Phi-3.5-MoE-instruct-bf16) was converted to MLX format from [microsoft/Phi-3.5-MoE-instruct](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct) using mlx-lm version **0.17.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Phi-3.5-MoE-instruct-bf16")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
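
Since this is an instruct-tuned model, applying the chat template before generating usually helps; a small sketch building on the snippet above (the message content is just an example):

```python
messages = [{"role": "user", "content": "Suggest two banana-and-dragonfruit snacks."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```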
|
IThinkUPC/autotrain-3_parts_car-58951133563 | IThinkUPC | "2023-05-16T14:55:03Z" | 182 | 0 | transformers | [
"transformers",
"pytorch",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:IThinkUPC/autotrain-data-3_parts_car",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-05-16T14:50:52Z" | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- IThinkUPC/autotrain-data-3_parts_car
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.966800640027561
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 58951133563
- CO2 Emissions (in grams): 0.9668
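
For a quick smoke test, the standard Transformers pipeline should work with this Swin classifier; a minimal sketch (`car.jpg` is a placeholder path, not a file from this repo):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="IThinkUPC/autotrain-3_parts_car-58951133563",
)
print(classifier("car.jpg"))  # replace with a real image path or URL
```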
## Validation Metrics
- Loss: 0.180
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000 |
JackCloudman/bityuno-zero-qwen2.5-3B-countdown | JackCloudman | "2025-01-25T17:25:41Z" | 15 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"tinyzero",
"r1",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-25T14:44:55Z" | ---
library_name: transformers
tags:
- tinyzero
- r1
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-3B
pipeline_tag: text-generation
---
# Bityuno Zero Qwen2.5-3B Countdown
**Bityuno Zero** is an implementation inspired by [TinyZero](https://github.com/Jiayi-Pan/TinyZero), designed to develop self-verification and search skills through reinforcement learning. This model is based on **Qwen2.5-3B** and has been trained specifically for the "Countdown" task. It is very experimental; check the repo for more information!

|
sd-concepts-library/dragonborn | sd-concepts-library | "2022-09-12T20:22:04Z" | 0 | 1 | null | [
"license:mit",
"region:us"
] | null | "2022-09-12T20:21:58Z" | ---
license: mit
---
### Dragonborn on Stable Diffusion
This is the `<dragonborn>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
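
Outside those notebooks, recent `diffusers` releases can load the embedding directly; a minimal sketch (the base model choice and the prompt are my assumptions, not part of this concept):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/dragonborn")
image = pipe("a portrait of a <dragonborn> warrior").images[0]
image.save("dragonborn.png")
```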
Here is the new concept you will be able to use as an `object`:







|
TemaBasoff/tomassas | TemaBasoff | "2023-12-28T21:59:50Z" | 4 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-12-28T21:56:43Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Tomassas Dreambooth model trained by TemaBasoff with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
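
You can also try the checkpoint straight from `diffusers`; a minimal sketch (the trigger word `tomassas` is an assumption based on the model name, and a CUDA GPU is assumed):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "TemaBasoff/tomassas", torch_dtype=torch.float16
).to("cuda")
image = pipe("photo of tomassas").images[0]
image.save("tomassas.png")
```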
Sample pictures of this concept:
|
ramon/ppo-LunarLander-v2 | ramon | "2024-02-22T22:06:07Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-22T21:11:30Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 306.14 +/- 19.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained policy from the Hub and rebuild the PPO agent:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is an assumption; check the repo files for the exact .zip name.
checkpoint = load_from_hub("ramon/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
gradientrouting-spar/test_proxy_trigger_coverage_0.2_seed_1_seed_2_20250328_102744 | gradientrouting-spar | "2025-03-28T10:28:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-28T10:28:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bristleback/bristleback-QcM8YtZ7u4 | bristleback | "2025-04-03T03:40:54Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | "2025-04-03T03:37:18Z" | <!-- No card content was captured for this repo: the scrape returned a Hugging Face HTTP 429 rate-limit page instead of the README. --> |
hkivancoral/smids_3x_deit_tiny_rms_0001_fold5 | hkivancoral | "2023-12-13T08:08:03Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-13T07:34:26Z" | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_deit_tiny_rms_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8933333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_deit_tiny_rms_0001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9250
- Accuracy: 0.8933
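Not part of the original card: a minimal inference sketch using the image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="hkivancoral/smids_3x_deit_tiny_rms_0001_fold5")
print(classifier("sample.png"))  # "sample.png" stands in for one of the dataset's images
```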
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4046 | 1.0 | 225 | 0.3353 | 0.855 |
| 0.2238 | 2.0 | 450 | 0.2977 | 0.8967 |
| 0.2029 | 3.0 | 675 | 0.3292 | 0.8717 |
| 0.1861 | 4.0 | 900 | 0.3918 | 0.8633 |
| 0.097 | 5.0 | 1125 | 0.5728 | 0.87 |
| 0.0763 | 6.0 | 1350 | 0.3602 | 0.8867 |
| 0.1211 | 7.0 | 1575 | 0.3953 | 0.9067 |
| 0.0628 | 8.0 | 1800 | 0.5619 | 0.8917 |
| 0.1484 | 9.0 | 2025 | 0.5750 | 0.88 |
| 0.0452 | 10.0 | 2250 | 0.6659 | 0.89 |
| 0.0229 | 11.0 | 2475 | 0.6256 | 0.8933 |
| 0.0617 | 12.0 | 2700 | 0.7075 | 0.87 |
| 0.0553 | 13.0 | 2925 | 0.6972 | 0.8983 |
| 0.0308 | 14.0 | 3150 | 0.6494 | 0.8983 |
| 0.0312 | 15.0 | 3375 | 0.6866 | 0.9 |
| 0.011 | 16.0 | 3600 | 0.7253 | 0.895 |
| 0.0983 | 17.0 | 3825 | 0.7035 | 0.8933 |
| 0.0451 | 18.0 | 4050 | 0.8265 | 0.8933 |
| 0.0418 | 19.0 | 4275 | 0.8696 | 0.8767 |
| 0.0469 | 20.0 | 4500 | 0.6273 | 0.9133 |
| 0.0203 | 21.0 | 4725 | 0.7939 | 0.895 |
| 0.0102 | 22.0 | 4950 | 0.7226 | 0.8967 |
| 0.0005 | 23.0 | 5175 | 0.8708 | 0.8933 |
| 0.0229 | 24.0 | 5400 | 0.9025 | 0.89 |
| 0.0344 | 25.0 | 5625 | 0.7685 | 0.9033 |
| 0.0016 | 26.0 | 5850 | 0.7805 | 0.9067 |
| 0.0048 | 27.0 | 6075 | 0.7684 | 0.9033 |
| 0.0028 | 28.0 | 6300 | 0.8595 | 0.8933 |
| 0.0098 | 29.0 | 6525 | 0.8847 | 0.8983 |
| 0.0002 | 30.0 | 6750 | 0.8488 | 0.8917 |
| 0.0 | 31.0 | 6975 | 0.9022 | 0.8883 |
| 0.0 | 32.0 | 7200 | 0.8024 | 0.895 |
| 0.0047 | 33.0 | 7425 | 0.8208 | 0.8933 |
| 0.0001 | 34.0 | 7650 | 0.9019 | 0.9017 |
| 0.0033 | 35.0 | 7875 | 0.8774 | 0.8883 |
| 0.0 | 36.0 | 8100 | 0.8642 | 0.885 |
| 0.0189 | 37.0 | 8325 | 0.8309 | 0.8983 |
| 0.0 | 38.0 | 8550 | 0.9322 | 0.89 |
| 0.0 | 39.0 | 8775 | 0.9453 | 0.8933 |
| 0.0 | 40.0 | 9000 | 0.9411 | 0.89 |
| 0.0 | 41.0 | 9225 | 0.9468 | 0.8917 |
| 0.0 | 42.0 | 9450 | 0.9584 | 0.8967 |
| 0.003 | 43.0 | 9675 | 0.9469 | 0.8917 |
| 0.0 | 44.0 | 9900 | 0.9339 | 0.8917 |
| 0.0 | 45.0 | 10125 | 0.9259 | 0.89 |
| 0.0 | 46.0 | 10350 | 0.9294 | 0.8917 |
| 0.0 | 47.0 | 10575 | 0.9214 | 0.8917 |
| 0.0 | 48.0 | 10800 | 0.9235 | 0.8917 |
| 0.0 | 49.0 | 11025 | 0.9243 | 0.8933 |
| 0.0 | 50.0 | 11250 | 0.9250 | 0.8933 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
TOMFORD79/OGANIC_O5 | TOMFORD79 | "2025-02-28T08:52:30Z" | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-02-28T08:35:52Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
gokuls/bert-base-uncased-rte | gokuls | "2023-01-27T00:23:37Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-01-27T00:19:35Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6064981949458483
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-rte
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6540
- Accuracy: 0.6065
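As a usage sketch (not from the original card): RTE is a sentence-pair entailment task, so inference takes a premise/hypothesis pair. The example sentences are illustrative only:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gokuls/bert-base-uncased-rte")
pair = {"text": "A man is playing a guitar.", "text_pair": "A person is making music."}
print(classifier(pair))  # label names come from the checkpoint's config
```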
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7009 | 1.0 | 20 | 0.6781 | 0.5560 |
| 0.6393 | 2.0 | 40 | 0.6540 | 0.6065 |
| 0.4606 | 3.0 | 60 | 0.7134 | 0.6498 |
| 0.2597 | 4.0 | 80 | 0.8379 | 0.6751 |
| 0.1492 | 5.0 | 100 | 1.3531 | 0.6282 |
| 0.0954 | 6.0 | 120 | 1.2220 | 0.6354 |
| 0.0561 | 7.0 | 140 | 1.2282 | 0.6715 |
| 0.0379 | 8.0 | 160 | 1.4368 | 0.6679 |
| 0.0368 | 9.0 | 180 | 1.8559 | 0.6498 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
worde-byte/GOATllama-v7 | worde-byte | "2025-03-12T17:40:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-12T17:37:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
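The card leaves this blank. Given the repo tags (`llama`, `text-generation`), a minimal sketch with the standard auto classes might look like the following; the repo id is from this card and everything else is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "worde-byte/GOATllama-v7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map is an assumption

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```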
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
muhtasham/tiny-mlm-glue-qnli-from-scratch-custom-tokenizer-target-glue-qnli | muhtasham | "2023-01-13T06:07:27Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-01-13T05:58:54Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tiny-mlm-glue-qnli-from-scratch-custom-tokenizer-target-glue-qnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-qnli-from-scratch-custom-tokenizer-target-glue-qnli
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli-from-scratch-custom-tokenizer](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli-from-scratch-custom-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6605
- Accuracy: 0.6096
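As a usage sketch (not in the original card): QNLI pairs a question with a candidate answer sentence. The example pair below is illustrative:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "muhtasham/tiny-mlm-glue-qnli-from-scratch-custom-tokenizer-target-glue-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QNLI: does the sentence contain the answer to the question?
enc = tokenizer("Where is the Eiffel Tower?", "The Eiffel Tower is in Paris.", return_tensors="pt")
with torch.no_grad():
    print(model(**enc).logits.softmax(-1))
```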
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6927 | 0.15 | 500 | 0.6897 | 0.5435 |
| 0.6834 | 0.31 | 1000 | 0.6687 | 0.5883 |
| 0.6709 | 0.46 | 1500 | 0.6631 | 0.6035 |
| 0.6647 | 0.61 | 2000 | 0.6655 | 0.5971 |
| 0.662 | 0.76 | 2500 | 0.6550 | 0.6081 |
| 0.6615 | 0.92 | 3000 | 0.6542 | 0.6150 |
| 0.6511 | 1.07 | 3500 | 0.6699 | 0.6000 |
| 0.6471 | 1.22 | 4000 | 0.6620 | 0.6066 |
| 0.6411 | 1.37 | 4500 | 0.6605 | 0.6096 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
visdata/tum1 | visdata | "2025-02-12T17:18:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-12T17:12:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
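The card leaves this blank. Since the tags include `llama` and `conversational`, a chat-style sketch is plausible; the repo id is from this card, and the presence of a chat template in the tokenizer is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "visdata/tum1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what a language model does in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```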
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nathanialhunt/430db786-4848-49a6-a227-1cfee8865bf3 | nathanialhunt | "2025-01-24T23:36:26Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | "2025-01-24T23:35:39Z" | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 430db786-4848-49a6-a227-1cfee8865bf3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9a31c4e8d7bc32cb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9a31c4e8d7bc32cb_train_data.json
type:
field_input: context
field_instruction: title
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/430db786-4848-49a6-a227-1cfee8865bf3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/9a31c4e8d7bc32cb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 02097914-0e30-48b1-b4f2-de4d7ed7768b
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 02097914-0e30-48b1-b4f2-de4d7ed7768b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 430db786-4848-49a6-a227-1cfee8865bf3
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0232
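As a usage sketch (not in the original card): the adapter should load on top of `facebook/opt-125m` via PEFT's auto class. This assumes the tokenizer was pushed with the adapter; otherwise load it from the base model:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "nathanialhunt/430db786-4848-49a6-a227-1cfee8865bf3"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)  # pulls the base model and applies the LoRA
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

inputs = tokenizer("Question:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```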
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 16.5819 | 0.0004 | 1 | 3.4605 |
| 14.9943 | 0.0012 | 3 | 3.4367 |
| 13.8399 | 0.0024 | 6 | 3.2900 |
| 12.1704 | 0.0036 | 9 | 3.0232 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
efieditor2/yinon | efieditor2 | "2025-03-09T12:15:20Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-03-09T11:35:59Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
gokulsrinivasagan/distilbert-base-uncased_qqp | gokulsrinivasagan | "2024-12-04T19:49:33Z" | 94 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-04T19:07:05Z" | ---
library_name: transformers
language:
- en
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.897180311649765
- name: F1
type: f1
value: 0.8657863300293804
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_qqp
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2497
- Accuracy: 0.8972
- F1: 0.8658
- Combined Score: 0.8815
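As a usage sketch (not from the original card): QQP asks whether two questions are duplicates, so inference takes a question pair:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gokulsrinivasagan/distilbert-base-uncased_qqp")
pair = {"text": "How do I learn Python?", "text_pair": "What is the best way to learn Python?"}
print(classifier(pair))  # label names come from the checkpoint's config
```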
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3242 | 1.0 | 1422 | 0.2672 | 0.8833 | 0.8400 | 0.8616 |
| 0.2172 | 2.0 | 2844 | 0.2497 | 0.8972 | 0.8658 | 0.8815 |
| 0.1525 | 3.0 | 4266 | 0.2637 | 0.8982 | 0.8669 | 0.8826 |
| 0.1072 | 4.0 | 5688 | 0.2838 | 0.8999 | 0.8631 | 0.8815 |
| 0.0788 | 5.0 | 7110 | 0.3267 | 0.9021 | 0.8666 | 0.8844 |
| 0.0599 | 6.0 | 8532 | 0.3451 | 0.9018 | 0.8682 | 0.8850 |
| 0.048 | 7.0 | 9954 | 0.3699 | 0.9003 | 0.8650 | 0.8827 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
songhieng/khmer-mt5-summarization-1024tk | songhieng | "2025-02-21T14:42:00Z" | 5 | 1 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"km",
"dataset:kimleang123/rfi_news",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2025-02-13T05:57:30Z" | ---
license: apache-2.0
datasets:
- kimleang123/rfi_news
language:
- km
metrics:
- rouge
base_model:
- google/mt5-small
pipeline_tag: summarization
library_name: transformers
---
# Khmer mT5 Summarization Model (1024 Tokens)
## Introduction
This repository contains a fine-tuned mT5 model for Khmer text summarization, extending the capabilities of the original [khmer-mt5-summarization](https://huggingface.co/songhieng/khmer-mt5-summarization) model. The primary enhancement in this version is the support for summarizing longer texts, with training adjusted to accommodate inputs up to 1024 tokens.
## Model Details
- **Base Model:** `google/mt5-small`
- **Fine-tuned for:** Khmer text summarization with extended input length
- **Training Dataset:** `kimleang123/khmer-text-dataset`
- **Framework:** Hugging Face `transformers`
- **Task Type:** Sequence-to-Sequence (Seq2Seq)
- **Input:** Khmer text (articles, paragraphs, or documents) up to 1024 tokens
- **Output:** Summarized Khmer text
- **Training Hardware:** GPU (Tesla T4)
- **Evaluation Metric:** ROUGE Score
## Installation & Setup
### 1️⃣ Install Dependencies
Ensure you have `transformers`, `torch`, and `datasets` installed:
```bash
pip install transformers torch datasets
```
### 2️⃣ Load the Model
To load and use the fine-tuned model:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "songhieng/khmer-mt5-summarization-1024tk"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```
## How to Use
### 1️⃣ Using Python Code
```python
def summarize_khmer(text, max_length=150):
    input_text = f"summarize: {text}"
    inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=1024)
    summary_ids = model.generate(**inputs, max_length=max_length, num_beams=5, length_penalty=2.0, early_stopping=True)
    summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
    return summary

khmer_text = "ααααα»ααΆααΆααααααΆαααααααΆα α‘α¦ ααΆαααΆαα α αΎαααΆααΊααΆαααααααα αααααα’αΆαααΈα’αΆααααααα"
summary = summarize_khmer(khmer_text)
print("Khmer Summary:", summary)
```
### 2️⃣ Using Hugging Face Pipeline
For a simpler approach:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="songhieng/khmer-mt5-summarization-1024tk")
khmer_text = "ααααα»ααΆααΆααααααΆαααααααΆα α‘α¦ ααΆαααΆαα α αΎαααΆααΊααΆαααααααα αααααα’αΆαααΈα’αΆααααααα"
summary = summarizer(khmer_text, max_length=150, min_length=30, do_sample=False)
print("Khmer Summary:", summary[0]['summary_text'])
```
### 3️⃣ Deploy as an API using FastAPI
You can create a simple API for summarization:
```python
from fastapi import FastAPI
app = FastAPI()
@app.post("/summarize/")
def summarize(text: str):
    inputs = tokenizer(f"summarize: {text}", return_tensors="pt", truncation=True, max_length=1024)
    summary_ids = model.generate(**inputs, max_length=150, num_beams=5, length_penalty=2.0, early_stopping=True)
    summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
    return {"summary": summary}
# Run with: uvicorn filename:app --reload
```
## Model Evaluation
The model was evaluated using **ROUGE scores**, which measure the similarity between the generated summaries and the reference summaries.
```python
from datasets import load_metric  # note: recent `datasets` versions drop this; use `evaluate.load("rouge")` instead

rouge = load_metric("rouge")

def compute_metrics(pred):
    labels_ids = pred.label_ids
    pred_ids = pred.predictions
    decoded_preds = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
    return rouge.compute(predictions=decoded_preds, references=decoded_labels)

# `trainer` is the Seq2SeqTrainer instance used during fine-tuning
trainer.evaluate()
```
## Saving & Uploading the Model
After fine-tuning, the model can be uploaded to the Hugging Face Hub:
```python
model.push_to_hub("songhieng/khmer-mt5-summarization-1024tk")
tokenizer.push_to_hub("songhieng/khmer-mt5-summarization-1024tk")
```
To download it later:
```python
model = AutoModelForSeq2SeqLM.from_pretrained("songhieng/khmer-mt5-summarization-1024tk")
tokenizer = AutoTokenizer.from_pretrained("songhieng/khmer-mt5-summarization-1024tk")
```
## Summary
| **Feature** | **Details** |
|-----------------------|-------------------------------------------------|
| **Base Model** | `google/mt5-small` |
| **Task** | Summarization |
| **Language**          | Khmer (ខ្មែរ)                                     |
| **Dataset** | `kimleang123/khmer-text-dataset` |
| **Framework** | Hugging Face Transformers |
| **Evaluation Metric** | ROUGE Score |
| **Deployment** | Hugging Face Model Hub, API (FastAPI), Python Code |
## Contributing
Contributions are welcome! Feel free to **open issues or submit pull requests** if you have any improvements or suggestions.
### Contact
If you have any questions, feel free to reach out via [Hugging Face Discussions](https://huggingface.co/) or create an issue in the repository.
**Built for the Khmer NLP Community** |
lesso/db65f200-d099-4cc0-ac80-147d68239539 | lesso | "2025-02-03T08:30:00Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | "2025-02-03T08:25:07Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: db65f200-d099-4cc0-ac80-147d68239539
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 9847e22f02cfb697_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9847e22f02cfb697_train_data.json
type:
field_instruction: meaning_representation
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/db65f200-d099-4cc0-ac80-147d68239539
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001018
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god18/9847e22f02cfb697_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bb6baa0e-3a47-4757-a2bc-7b28afcb8d1e
wandb_project: ab-god18
wandb_run: your_name
wandb_runid: bb6baa0e-3a47-4757-a2bc-7b28afcb8d1e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# db65f200-d099-4cc0-ac80-147d68239539
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9124
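As a usage sketch (not in the original card): load the base model and apply this LoRA adapter explicitly with PEFT; the dtype and device settings are assumptions:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b", torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "lesso/db65f200-d099-4cc0-ac80-147d68239539")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b")
```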
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001018
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7198 | 0.0010 | 1 | 2.4989 |
| 1.1574 | 0.0487 | 50 | 1.1844 |
| 1.0025 | 0.0975 | 100 | 1.0272 |
| 1.2808 | 0.1462 | 150 | 0.9562 |
| 0.964 | 0.1949 | 200 | 0.9124 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sail-rvc/vibrivibribbon | sail-rvc | "2023-07-14T07:44:51Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:44:39Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# vibrivibribbon
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:44:51
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
ThomasTKB/Flux-Lora | ThomasTKB | "2025-02-17T16:19:26Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-17T16:19:19Z" | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: tkbisfelt
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Flux Lora
<Gallery />
## Model description
## Trigger words
You should use `tkbisfelt` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ThomasTKB/Flux-Lora/tree/main) them in the Files & versions tab.
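Not from the original card: a sketch of using the LoRA with `diffusers` on top of FLUX.1-dev; the weight-name resolution and generation settings are assumptions:
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("ThomasTKB/Flux-Lora")  # assumes the repo's default .safetensors weight name resolves

image = pipe("tkbisfelt felt diorama of a mountain village", num_inference_steps=28).images[0]
image.save("tkbisfelt.png")
```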
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
TheBloke/LlongOrca-13B-16K-GGUF | TheBloke | "2023-09-27T13:02:37Z" | 172 | 10 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"arxiv:2306.02707",
"arxiv:2301.13688",
"arxiv:2307.09288",
"base_model:Open-Orca/LlongOrca-13B-16k",
"base_model:quantized:Open-Orca/LlongOrca-13B-16k",
"license:llama2",
"region:us"
] | text-generation | "2023-09-05T19:49:29Z" | ---
language:
- en
license: llama2
library_name: transformers
datasets:
- Open-Orca/OpenOrca
model_name: LlongOrca 13B 16K
inference: false
model_creator: Open-Orca
model_link: https://huggingface.co/Open-Orca/LlongOrca-13B-16k
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
base_model: Open-Orca/LlongOrca-13B-16k
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# LlongOrca 13B 16K - GGUF
- Model creator: [Open-Orca](https://huggingface.co/Open-Orca)
- Original model: [LlongOrca 13B 16K](https://huggingface.co/Open-Orca/LlongOrca-13B-16k)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Open-Orca's LlongOrca 13B 16K](https://huggingface.co/Open-Orca/LlongOrca-13B-16k).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML)
* [Open-Orca's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Open-Orca/LlongOrca-13B-16k)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `llama2`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Open-Orca's LlongOrca 13B 16K](https://huggingface.co/Open-Orca/LlongOrca-13B-16k).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
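As a sanity check on the fractional figures, take Q4_K: going by llama.cpp's `block_q4_K` layout (two fp16 super-scales, 4 bytes; 12 bytes of packed 6-bit block scales and mins; 128 bytes of 4-bit quants; layout quoted from memory, so verify against `ggml-quants.h` if it matters), a 256-weight super-block occupies 144 bytes, i.e. 144 × 8 / 256 = 4.5 bits per weight, matching the 4.5 bpw quoted above.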
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llongorca-13b-16k.Q2_K.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [llongorca-13b-16k.Q3_K_S.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [llongorca-13b-16k.Q3_K_M.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [llongorca-13b-16k.Q3_K_L.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [llongorca-13b-16k.Q4_0.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llongorca-13b-16k.Q4_K_S.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [llongorca-13b-16k.Q4_K_M.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [llongorca-13b-16k.Q5_0.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llongorca-13b-16k.Q5_K_S.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [llongorca-13b-16k.Q5_K_M.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [llongorca-13b-16k.Q6_K.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [llongorca-13b-16k.Q8_0.gguf](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF/blob/main/llongorca-13b-16k.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llongorca-13b-16k.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/LlongOrca-13B-16K-GGUF", model_file="llongorca-13b-16k.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
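For llama-cpp-python, a minimal sketch along the same lines (the `n_gpu_layers` value is illustrative; set it to 0 for CPU-only inference):
```python
from llama_cpp import Llama

# Load a locally downloaded GGUF file; adjust n_gpu_layers for your hardware.
llm = Llama(model_path="llongorca-13b-16k.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)
# Use the ChatML prompt format this model was trained with.
output = llm(
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nAI is going to<|im_end|>\n<|im_start|>assistant",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```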
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper WikieΕ, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, ιΏζ, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik BjΓ€reholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Open-Orca's LlongOrca 13B 16K
<p><h1>🐋 The Second Llong Context Orca! 🐋</h1></p>

# OpenOrca - LlongOrca - 13B - 16k
We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune on top of [LLongMA-2-13b-16k](https://huggingface.co/conceptofmind/LLongMA-2-13b-16k).
This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707).
We use [OpenChat](https://huggingface.co/openchat) packing, trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
This release is trained on a curated filtered subset of most of our GPT-4 augmented data.
It is the same subset of our data as was used in our [OpenOrcaxOpenChat-Preview2-13B model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).
HF Leaderboard evals place this model as #1 for all 13B long context models at release time.
We achieve >112% the performance of the base LLongMA2-13b-16k model we tuned on top of.
As well, we preserve >98% of the performance of the OpenOrcaxOpenChat-Preview2-13B model we share datasets with, while extending the context to 16k.
We did this training as part of testing the setup of our H100 cluster.
Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
Many thanks to @EnricoShippole, @theemozilla, and @kaiokendev1 for the fine work on creating the LlongMA-2-13b-16k model this was trained on top of!
We are in the process of training more models, so keep a lookout on our org for releases coming soon with exciting partners.
We will also give sneak-peak announcements on our Discord, which you can find here:
https://AlignmentLab.ai
# Prompt Template
We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this.
## Example Prompt Exchange
```
<|im_start|>system
You are LlongOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!
<|im_end|>
<|im_start|>user
How are you<|im_end|>
<|im_start|>assistant
I am doing well!<|im_end|>
<|im_start|>user
How are you now?<|im_end|>
```
# Evaluation
We have evaluated using the methodology and tools for the HuggingFace Leaderboard, and find that we have significantly improved upon the base long context model.
We reach >112% of LLongMA2-13B-16k performance.
## HuggingFaceH4 Open LLM Leaderboard Performance
We have run our own tests using parameters matching the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) evals.
We preserve >98% of OpenOrcaxOpenChat-Preview2-13B performance and are #1 on the leaderboard for long context 13B models at release time.
We have >103% performance of the next 16k model (vicuna-13b-v1.5-16k).
As well, we expect the context extension techniques from LLongMA to be more robust than other 16k context models available.

## GPT4ALL Leaderboard Performance
We find we score higher than all non-OpenOrca models on the GPT4ALL leaderboard, while preserving ~98.7% of our OpenOrcaxOpenChat-Preview2-13B performance.

# Dataset
We used a curated, filtered selection of most of the GPT-4 augmented data from our OpenOrca dataset, which aims to reproduce the Orca Research Paper dataset.
Further details of our curation practices will be forthcoming with our full model releases.
# Training
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
We trained with 8x H100 GPUs for 10 hours, completing 4 epochs of full fine tuning on our dataset in one training run.
Commodity cost was ~$300.
# Citation
```bibtex
@software{dale2023llongorca13b,
title = {LlongOrca13B: Llama2-13B Model Instruct-tuned for Long Context on Filtered OpenOrcaV1 GPT-4 Dataset},
author = {Alpin Dale and Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/LlongOrca-13B-16k}},
}
@software{openchat,
title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
doi = {10.5281/zenodo.8105775},
url = {https://github.com/imoneoi/openchat},
version = {pre-release},
year = {2023},
month = {7},
}
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
<!-- original-model-card end -->
|
emilykang/Gemma_medQuad_finetuned_lora | emilykang | "2024-05-17T05:06:54Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-05-16T21:11:41Z" | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: Gemma_medQuad_finetuned_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma_medQuad_finetuned_lora
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
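A minimal loading sketch with PEFT (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
# Attach the fine-tuned LoRA adapter on top of the frozen base model.
model = PeftModel.from_pretrained(base, "emilykang/Gemma_medQuad_finetuned_lora")

inputs = tokenizer("What are the symptoms of asthma?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```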
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
GDinesh/llava-1-5 | GDinesh | "2024-03-21T15:00:50Z" | 0 | 0 | null | [
"arxiv:2310.03744",
"arxiv:2304.08485",
"arxiv:2311.05437",
"arxiv:2311.00571",
"arxiv:2306.00890",
"arxiv:2309.09958",
"arxiv:2309.10020",
"arxiv:2306.14895",
"arxiv:2112.05682",
"endpoints_compatible",
"region:us"
] | null | "2024-03-21T14:48:38Z" | # 🌋 LLaVA: Large Language and Vision Assistant
*Visual instruction tuning towards large language and vision models with GPT-4 level capabilities.*
[📢 [LLaVA-NeXT Blog](https://llava-vl.github.io/blog/2024-01-30-llava-next/)] [[Project Page](https://llava-vl.github.io/)] [[Demo](https://llava.hliu.cc/)] [[Data](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md)] [[Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)]
🤝 Community Contributions: [[llama.cpp](https://github.com/ggerganov/llama.cpp/pull/3436)] [[Colab](https://github.com/camenduru/LLaVA-colab)] [[🤗 Space](https://huggingface.co/spaces/badayvedat/LLaVA)] [[Replicate](https://replicate.com/yorickvp/llava-13b)] [[AutoGen](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb)] [[BakLLaVA](https://github.com/SkunkworksAI/BakLLaVA)]
**Improved Baselines with Visual Instruction Tuning** [[Paper](https://arxiv.org/abs/2310.03744)] [[HF](https://huggingface.co/papers/2310.03744)] <br>
[Haotian Liu](https://hliu.cc), [Chunyuan Li](https://chunyuan.li/), [Yuheng Li](https://yuheng-li.github.io/), [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)
**Visual Instruction Tuning** (NeurIPS 2023, **Oral**) [[Paper](https://arxiv.org/abs/2304.08485)] [[HF](https://huggingface.co/papers/2304.08485)] <br>
[Haotian Liu*](https://hliu.cc), [Chunyuan Li*](https://chunyuan.li/), [Qingyang Wu](https://scholar.google.ca/citations?user=HDiw-TsAAAAJ&hl=en/), [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/) (*Equal Contribution)
<!--p align="center">
<a href="https://llava.hliu.cc/"><img src="images/llava_logo.png" width="50%"></a> <br>
Generated by <a href="https://gligen.github.io/">GLIGEN</a> via "a cute lava llama with glasses" and box prompt
</p-->
## Release
- [03/10] Releasing **LMMs-Eval**, a highly efficient evaluation pipeline we used when developing LLaVA-NeXT. It supports the evaluation of LMMs on dozens of public datasets and allows new dataset onboarding, making the dev of new LMMs much faster. [[Blog](https://lmms-lab.github.io/lmms-eval-blog/lmms-eval-0.1/)] [[Codebase](https://github.com/EvolvingLMMs-Lab/lmms-eval)]
- [1/30] π₯ LLaVA-NeXT (LLaVA-1.6) is out! With additional scaling to LLaVA-1.5, LLaVA-NeXT-34B outperforms Gemini Pro on some benchmarks. It can now process 4x more pixels and perform more tasks/applications than before. Check out the [blog post](https://llava-vl.github.io/blog/2024-01-30-llava-next/), and explore the [demo](https://llava.hliu.cc/)! Models are available in [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md). Training/eval data and scripts coming soon.
- [11/10] [LLaVA-Plus](https://llava-vl.github.io/llava-plus/) is released: Learning to Use Tools for Creating Multimodal Agents, with LLaVA-Plus (LLaVA that Plug and Learn to Use Skills). [[Project Page](https://llava-vl.github.io/llava-plus/)] [[Demo](https://llavaplus.ngrok.io/)] [[Code](https://github.com/LLaVA-VL/LLaVA-Plus-Codebase)] [[Paper](https://arxiv.org/abs/2311.05437)]
- [11/2] [LLaVA-Interactive](https://llava-vl.github.io/llava-interactive/) is released: Experience the future of human-AI multimodal interaction with an all-in-one demo for Image Chat, Segmentation, Generation and Editing. [[Project Page](https://llava-vl.github.io/llava-interactive/)] [[Demo](https://llavainteractive.ngrok.io/)] [[Code](https://github.com/LLaVA-VL/LLaVA-Interactive-Demo)] [[Paper](https://arxiv.org/abs/2311.00571)]
- [10/26] π₯ LLaVA-1.5 with LoRA achieves comparable performance as full-model finetuning, with a reduced GPU RAM requirement ([ckpts](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md#llava-v15), [script](https://github.com/haotian-liu/LLaVA#train)). We also provide a [doc](https://github.com/haotian-liu/LLaVA/blob/main/docs/Finetune_Custom_Data.md) on how to finetune LLaVA-1.5 on your own dataset with LoRA.
- [10/12] Check out the Korean LLaVA (Ko-LLaVA), created by ETRI, who has generously supported our research! [[🤗 Demo](https://huggingface.co/spaces/etri-vilab/Ko-LLaVA)]
- [10/5] π₯ LLaVA-1.5 is out! Achieving SoTA on 11 benchmarks, with just simple modifications to the original LLaVA, utilizes all public data, completes training in ~1 day on a single 8-A100 node, and surpasses methods like Qwen-VL-Chat that use billion-scale data. Check out the [technical report](https://arxiv.org/abs/2310.03744), and explore the [demo](https://llava.hliu.cc/)! Models are available in [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md). The training data and scripts of LLaVA-1.5 are released [here](https://github.com/haotian-liu/LLaVA#train), and evaluation scripts are released [here](https://github.com/haotian-liu/LLaVA/blob/main/docs/Evaluation.md)!
- [9/26] LLaVA is improved with reinforcement learning from human feedback (RLHF) for better fact grounding and reduced hallucination. Check out the new SFT and RLHF checkpoints at project [[LLaVA-RLHF]](https://llava-rlhf.github.io/)
- [9/22] [LLaVA](https://arxiv.org/abs/2304.08485) is accepted by NeurIPS 2023 as **oral presentation**, and [LLaVA-Med](https://arxiv.org/abs/2306.00890) is accepted by NeurIPS 2023 Datasets and Benchmarks Track as **spotlight presentation**.
<details>
<summary>More</summary>
- [11/6] Support **Intel** dGPU and CPU platforms. [More details here.](https://github.com/haotian-liu/LLaVA/tree/intel/docs/intel)
- [10/12] LLaVA is now supported in [llama.cpp](https://github.com/ggerganov/llama.cpp/pull/3436) with 4-bit / 5-bit quantization support!
- [10/11] The training data and scripts of LLaVA-1.5 are released [here](https://github.com/haotian-liu/LLaVA#train), and evaluation scripts are released [here](https://github.com/haotian-liu/LLaVA/blob/main/docs/Evaluation.md)!
- [10/10] [Roboflow Deep Dive](https://blog.roboflow.com/first-impressions-with-llava-1-5/): First Impressions with LLaVA-1.5.
- [9/20] We summarize our empirical study of training 33B and 65B LLaVA models in a [note](https://arxiv.org/abs/2309.09958). Further, if you are interested in the comprehensive review, evolution and trend of multimodal foundation models, please check out our recent survey paper [``Multimodal Foundation Models: From Specialists to General-Purpose Assistants''.](https://arxiv.org/abs/2309.10020)
<p align="center">
<img src="https://github.com/Computer-Vision-in-the-Wild/CVinW_Readings/blob/main/images/mfm_evolution.jpeg?raw=true" width=50%/>
</p>
- [7/19] π₯ We release a major upgrade, including support for LLaMA-2, LoRA training, 4-/8-bit inference, higher resolution (336x336), and a lot more. We release [LLaVA Bench](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_Bench.md) for benchmarking open-ended visual chat with results from Bard and Bing-Chat. We also support and verify training with RTX 3090 and RTX A6000. Check out [LLaVA-from-LLaMA-2](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_from_LLaMA2.md), and our [model zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)!
- [6/26] [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/) on **Large Multimodal Models: Towards Building and Surpassing Multimodal GPT-4**! Please check out [[Slides](https://datarelease.blob.core.windows.net/tutorial/vision_foundation_models_2023/slides/Chunyuan_cvpr2023_tutorial_lmm.pdf)] [[Notes](https://arxiv.org/abs/2306.14895)] [[YouTube](https://youtu.be/mkI7EPD1vp8)] [[Bilibli](https://www.bilibili.com/video/BV1Ng4y1T7v3/)].
- [6/11] We released the preview for the most requested feature: DeepSpeed and LoRA support! Please see the documentation [here](./docs/LoRA.md).
- [6/1] We released **LLaVA-Med: Large Language and Vision Assistant for Biomedicine**, a step towards building biomedical domain large language and vision models with GPT-4 level capabilities. Check out the [paper](https://arxiv.org/abs/2306.00890) and [page](https://github.com/microsoft/LLaVA-Med).
- [5/6] We are releasing [LLaVA-Lightning-MPT-7B-preview](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview), based on MPT-7B-Chat! See [here](#LLaVA-MPT-7b) for more details.
- [5/2] 🔥 We are releasing LLaVA-Lightning! Train a lite, multimodal GPT-4 with just $40 in 3 hours! See [here](#train-llava-lightning) for more details.
- [4/27] Thanks to the community effort, LLaVA-13B with 4-bit quantization allows you to run on a GPU with as few as 12GB VRAM! Try it out [here](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/llava).
- [4/17] 🔥 We released **LLaVA: Large Language and Vision Assistant**. We propose visual instruction tuning, towards building large language and vision models with GPT-4 level capabilities. Check out the [paper](https://arxiv.org/abs/2304.08485) and [demo](https://llava.hliu.cc/).
</details>
<!-- <a href="https://llava.hliu.cc/"><img src="assets/demo.gif" width="70%"></a> -->
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
**Usage and License Notices**: This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses, including but not limited to the [OpenAI Terms of Use](https://openai.com/policies/terms-of-use) for the dataset and the specific licenses for base language models for checkpoints trained using the dataset (e.g. [Llama community license](https://ai.meta.com/llama/license/) for LLaMA-2 and Vicuna-v1.5). This project does not impose any additional constraints beyond those stipulated in the original licenses. Furthermore, users are reminded to ensure that their use of the dataset and checkpoints is in compliance with all applicable laws and regulations.
## Contents
- [Install](#install)
- [LLaVA Weights](#llava-weights)
- [Demo](#Demo)
- [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)
- [Dataset](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md)
- [Train](#train)
- [Evaluation](#evaluation)
## Install
If you are not using Linux, do *NOT* proceed, see instructions for [macOS](https://github.com/haotian-liu/LLaVA/blob/main/docs/macOS.md) and [Windows](https://github.com/haotian-liu/LLaVA/blob/main/docs/Windows.md).
1. Clone this repository and navigate to LLaVA folder
```bash
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
```
2. Install Package
```Shell
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip # enable PEP 660 support
pip install -e .
```
3. Install additional packages for training cases
```
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
```
### Upgrade to latest code base
```Shell
git pull
pip install -e .
# if you see some import errors when you upgrade, please try running the command below (without #)
# pip install flash-attn --no-build-isolation --no-cache-dir
```
### Quick Start With HuggingFace
<details>
<summary>Example Code</summary>
```Python
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model
model_path = "liuhaotian/llava-v1.5-7b"
tokenizer, model, image_processor, context_len = load_pretrained_model(
model_path=model_path,
model_base=None,
model_name=get_model_name_from_path(model_path)
)
```
Check out the details with the `load_pretrained_model` function in `llava/model/builder.py`.
You can also use the `eval_model` function in `llava/eval/run_llava.py` to get the output easily. This lets you run the code directly on Colab after downloading this repository.
``` python
model_path = "liuhaotian/llava-v1.5-7b"
prompt = "What are the things I should be cautious about when I visit here?"
image_file = "https://llava-vl.github.io/static/images/view.jpg"
args = type('Args', (), {
"model_path": model_path,
"model_base": None,
"model_name": get_model_name_from_path(model_path),
"query": prompt,
"conv_mode": None,
"image_file": image_file,
"sep": ",",
"temperature": 0,
"top_p": None,
"num_beams": 1,
"max_new_tokens": 512
})()
eval_model(args)
```
</details>
## LLaVA Weights
Please check out our [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md) for all public LLaVA checkpoints, and the instructions of how to use the weights.
## Demo
### Gradio Web UI
To launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare between different checkpoints, you only need to launch the controller and the web server *ONCE*.
```mermaid
flowchart BT
%% Declare Nodes
gws("Gradio (UI Server)")
c("Controller (API Server):<br/>PORT: 10000")
mw7b("Model Worker:<br/>llava-v1.5-7b<br/>PORT: 40000")
mw13b("Model Worker:<br/>llava-v1.5-13b<br/>PORT: 40001")
sglw13b("SGLang Backend:<br/>llava-v1.6-34b<br/>http://localhost:30000")
lsglw13b("SGLang Worker:<br/>llava-v1.6-34b<br/>PORT: 40002")
%% Declare Styles
classDef data fill:#3af,stroke:#48a,stroke-width:2px,color:#444
classDef success fill:#8f8,stroke:#0a0,stroke-width:2px,color:#444
classDef failure fill:#f88,stroke:#f00,stroke-width:2px,color:#444
%% Assign Styles
class id,od data;
class cimg,cs_s,scsim_s success;
class ncimg,cs_f,scsim_f failure;
subgraph Demo Connections
direction BT
c<-->gws
mw7b<-->c
mw13b<-->c
lsglw13b<-->c
sglw13b<-->lsglw13b
end
```
#### Launch a controller
```Shell
python -m llava.serve.controller --host 0.0.0.0 --port 10000
```
#### Launch a gradio web server.
```Shell
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
```
You just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.
#### Launch a SGLang worker
This is the recommended way to serve the LLaVA model with high throughput; you need to install SGLang first. Note that `4-bit` quantization is not yet supported on SGLang-LLaVA, so if you have limited GPU VRAM, please check out the model worker with [quantization](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#launch-a-model-worker-4-bit-8-bit-inference-quantized).
```Shell
pip install "sglang[all]"
```
You'll first launch an SGLang backend worker, which will execute the model on the GPUs. Remember the `--port` you've set; you'll use it later.
```Shell
# Single GPU
CUDA_VISIBLE_DEVICES=0 python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --port 30000
# Multiple GPUs with tensor parallel
CUDA_VISIBLE_DEVICES=0,1 python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-13b --tokenizer-path llava-hf/llava-1.5-13b-hf --port 30000 --tp 2
```
Tokenizers (temporary): `llava-hf/llava-1.5-7b-hf`, `llava-hf/llava-1.5-13b-hf`, `liuhaotian/llava-v1.6-34b-tokenizer`.
You'll then launch a LLaVA-SGLang worker that will communicate between LLaVA controller and SGLang backend to route the requests. Set `--sgl-endpoint` to `http://127.0.0.1:port` where `port` is the one you just set (default: 30000).
```Shell
python -m llava.serve.sglang_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --sgl-endpoint http://127.0.0.1:30000
```
#### Launch a model worker
This is the actual *worker* that performs the inference on the GPU. Each worker is responsible for a single model specified in `--model-path`.
```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b
```
Wait until the process finishes loading the model and you see "Uvicorn running on ...". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.
You can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the `--controller` the same, and modify the `--port` and `--worker` to a different port number for each worker.
```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port <different from 40000, say 40001> --worker http://localhost:<change accordingly, i.e. 40001> --model-path <ckpt2>
```
If you are using an Apple device with an M1 or M2 chip, you can specify the mps device by using the `--device` flag: `--device mps`.
#### Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)
If the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs. Our latest code base will automatically try to use multiple GPUs if you have more than one GPU. You can specify which GPUs to use with `CUDA_VISIBLE_DEVICES`. Below is an example of running with the first two GPUs.
```Shell
CUDA_VISIBLE_DEVICES=0,1 python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b
```
#### Launch a model worker (4-bit, 8-bit inference, quantized)
You can launch the model worker with quantized bits (4-bit, 8-bit), which allows you to run the inference with reduced GPU memory footprint, potentially allowing you to run on a GPU with as few as 12GB VRAM. Note that inference with quantized bits may not be as accurate as the full-precision model. Simply append `--load-4bit` or `--load-8bit` to the **model worker** command that you are executing. Below is an example of running with 4-bit quantization.
```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b --load-4bit
```
#### Launch a model worker (LoRA weights, unmerged)
You can launch the model worker with LoRA weights, without merging them with the base checkpoint, to save disk space. There will be additional loading time, while the inference speed is the same as the merged checkpoints. Unmerged LoRA checkpoints do not have `lora-merge` in the model name, and are usually much smaller (less than 1GB) than the merged checkpoints (13G for 7B, and 25G for 13B).
To load unmerged LoRA weights, you simply need to pass an additional argument `--model-base`, which is the base LLM that is used to train the LoRA weights. You can check the base LLM of each LoRA weights in the [model zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md).
```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3 --model-base lmsys/vicuna-13b-v1.3
```
### CLI Inference
Chat about images using LLaVA without the need for the Gradio interface. It also supports multiple GPUs, and 4-bit and 8-bit quantized inference. With 4-bit quantization, our LLaVA-1.5-7B uses less than 8GB of VRAM on a single GPU.
```Shell
python -m llava.serve.cli \
--model-path liuhaotian/llava-v1.5-7b \
--image-file "https://llava-vl.github.io/static/images/view.jpg" \
--load-4bit
```
<img src="images/demo_cli.gif" width="70%">
## Train
*Below is the latest training configuration for LLaVA v1.5. For legacy models, please refer to README of [this](https://github.com/haotian-liu/LLaVA/tree/v1.0.1) version for now. We'll add them in a separate doc later.*
LLaVA training consists of two stages: (1) feature alignment stage: use our 558K subset of the LAION-CC-SBU dataset to connect a *frozen pretrained* vision encoder to a *frozen LLM*; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following data, plus around 515K VQA data from academic-oriented tasks, to teach the model to follow multimodal instructions.
LLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the `per_device_train_batch_size` and increase the `gradient_accumulation_steps` accordingly. Always keep the global batch size the same: `per_device_train_batch_size` x `gradient_accumulation_steps` x `num_gpus` (for example, halving the number of GPUs from 8 to 4 means doubling `gradient_accumulation_steps` to preserve the global batch size).
### Hyperparameters
We use a similar set of hyperparameters as Vicuna in finetuning. Both hyperparameters used in pretraining and finetuning are provided below.
1. Pretraining
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-v1.5-13B | 256 | 1e-3 | 1 | 2048 | 0 |
2. Finetuning
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-v1.5-13B | 128 | 2e-5 | 1 | 2048 | 0 |
### Download Vicuna checkpoints (automatically)
Our base model Vicuna v1.5, which is an instruction-tuned chatbot, will be downloaded automatically when you run our provided training scripts. No action is needed.
### Pretrain (feature alignment)
Please download the 558K subset of the LAION-CC-SBU dataset with BLIP captions we use in the paper [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain).
Pretrain takes around 5.5 hours for LLaVA-v1.5-13B on 8x A100 (80G), due to the increased resolution to 336px. It takes around 3.5 hours for LLaVA-v1.5-7B.
Training script with DeepSpeed ZeRO-2: [`pretrain.sh`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/pretrain.sh).
- `--mm_projector_type mlp2x_gelu`: the two-layer MLP vision-language connector.
- `--vision_tower openai/clip-vit-large-patch14-336`: CLIP ViT-L/14 336px.
<details>
<summary>Pretrain takes around 20 hours for LLaVA-7B on 8x V100 (32G)</summary>
We provide training script with DeepSpeed [here](https://github.com/haotian-liu/LLaVA/blob/main/scripts/pretrain_xformers.sh).
Tips:
- If you are using V100 which is not supported by FlashAttention, you can use the [memory-efficient attention](https://arxiv.org/abs/2112.05682) implemented in [xFormers](https://github.com/facebookresearch/xformers). Install xformers and replace `llava/train/train_mem.py` above with [llava/train/train_xformers.py](llava/train/train_xformers.py).
</details>
### Visual Instruction Tuning
1. Prepare data
Please download the annotation of the final mixture our instruction tuning data [llava_v1_5_mix665k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json), and download the images from constituting datasets:
- COCO: [train2017](http://images.cocodataset.org/zips/train2017.zip)
- GQA: [images](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip)
- OCR-VQA: [download script](https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing), **we save all files as `.jpg`**
- TextVQA: [train_val_images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip)
- VisualGenome: [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip)
After downloading all of them, organize the data as follows in `./playground/data`,
```
├── coco
│   └── train2017
├── gqa
│   └── images
├── ocr_vqa
│   └── images
├── textvqa
│   └── train_images
└── vg
    ├── VG_100K
    └── VG_100K_2
```
2. Start training!
You may download our pretrained projectors in [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md). It is not recommended to use legacy projectors, as they may have been trained with a different version of the codebase, and if any option is off, the model will not function or train as expected.
Visual instruction tuning takes around 20 hours for LLaVA-v1.5-13B on 8x A100 (80G), due to the increased resolution to 336px. It takes around 10 hours for LLaVA-v1.5-7B on 8x A100 (40G).
Training script with DeepSpeed ZeRO-3: [`finetune.sh`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/finetune.sh).
If you do not have enough GPU memory:
- Use LoRA: [`finetune_lora.sh`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/finetune_lora.sh). We are able to fit 13B training in 8-A100-40G/8-A6000, and 7B training in 8-RTX3090. Make sure `per_device_train_batch_size*gradient_accumulation_steps` is the same as the provided script for best reproducibility.
- Replace `zero3.json` with `zero3_offload.json` which offloads some parameters to CPU RAM. This slows down the training speed.
If you are interested in finetuning the LLaVA model on your own task/data, please check out [`Finetune_Custom_Data.md`](https://github.com/haotian-liu/LLaVA/blob/main/docs/Finetune_Custom_Data.md).
New options to note:
- `--mm_projector_type mlp2x_gelu`: the two-layer MLP vision-language connector.
- `--vision_tower openai/clip-vit-large-patch14-336`: CLIP ViT-L/14 336px.
- `--image_aspect_ratio pad`: this pads the non-square images to square, instead of cropping them; it slightly reduces hallucination.
- `--group_by_modality_length True`: this should only be used when your instruction tuning dataset contains both language (e.g. ShareGPT) and multimodal (e.g. LLaVA-Instruct). It makes the training sampler only sample a single modality (either image or language) during training, which we observe to speed up training by ~25%, and does not affect the final outcome.
## Evaluation
In LLaVA-1.5, we evaluate models on a diverse set of 12 benchmarks. To ensure reproducibility, we evaluate the models with greedy decoding. We do not evaluate using beam search, so that the inference process is consistent with the real-time outputs of the chat demo.
See [Evaluation.md](https://github.com/haotian-liu/LLaVA/blob/main/docs/Evaluation.md).
### GPT-assisted Evaluation
Our GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.
1. Generate LLaVA responses
```Shell
python model_vqa.py \
--model-path ./checkpoints/LLaVA-13B-v0 \
--question-file \
playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
--image-folder \
/path/to/coco2014_val \
--answers-file \
/path/to/answer-file-our.jsonl
```
2. Evaluate the generated responses. In our case, [`answer-file-ref.jsonl`](./playground/data/coco2014_val_qa_eval/qa90_gpt4_answer.jsonl) is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.
```Shell
OPENAI_API_KEY="sk-***********************************" python llava/eval/eval_gpt_review_visual.py \
--question playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
--context llava/eval/table/caps_boxes_coco2014_val_80.jsonl \
--answer-list \
/path/to/answer-file-ref.jsonl \
/path/to/answer-file-our.jsonl \
--rule llava/eval/table/rule.json \
--output /path/to/review.json
```
3. Summarize the evaluation results
```Shell
python summarize_gpt_review.py
```
## Citation
If you find LLaVA useful for your research and applications, please cite using this BibTeX:
```bibtex
@misc{liu2024llavanext,
title={LLaVA-NeXT: Improved reasoning, OCR, and world knowledge},
url={https://llava-vl.github.io/blog/2024-01-30-llava-next/},
author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Li, Bo and Zhang, Yuanhan and Shen, Sheng and Lee, Yong Jae},
month={January},
year={2024}
}
@misc{liu2023improvedllava,
title={Improved Baselines with Visual Instruction Tuning},
author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae},
publisher={arXiv:2310.03744},
year={2023},
}
@misc{liu2023llava,
title={Visual Instruction Tuning},
author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
publisher={NeurIPS},
year={2023},
}
```
## Acknowledgement
- [Vicuna](https://github.com/lm-sys/FastChat): the codebase we built upon, and our base model Vicuna-13B that has the amazing language capabilities!
## Related Projects
- [Instruction Tuning with GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day](https://github.com/microsoft/LLaVA-Med)
- [Otter: In-Context Multi-Modal Instruction Tuning](https://github.com/Luodian/Otter)
For future project ideas, please check out:
- [SEEM: Segment Everything Everywhere All at Once](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)
- [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything) to detect, segment, and generate anything by marrying [Grounding DINO](https://github.com/IDEA-Research/GroundingDINO) and [Segment-Anything](https://github.com/facebookresearch/segment-anything).
|
LHRuig/danielsharmansx | LHRuig | "2025-03-30T23:55:55Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-03-30T23:55:51Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: danielsharmansx
---
# danielsharmansx
<Gallery />
## Model description
danielsharmansx lora
## Trigger words
You should use `danielsharmansx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/danielsharmansx/tree/main) them in the Files & versions tab.
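A minimal loading sketch with `diffusers` (dtype, device, and step count are illustrative):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# Load the LoRA weights and use the trigger word in the prompt.
pipe.load_lora_weights("LHRuig/danielsharmansx")
image = pipe("danielsharmansx wearing a suit", num_inference_steps=28).images[0]
image.save("suit.png")
```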
|
NikolayKozloff/Viking-SlimSonnet-v1-7B-Q8_0-GGUF | NikolayKozloff | "2024-09-01T19:33:31Z" | 6 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"fi",
"sv",
"no",
"da",
"is",
"nn",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:mpasila/Sonnet3.5-SlimOrcaDedupCleaned-4k-context",
"base_model:mpasila/Viking-SlimSonnet-v1-7B",
"base_model:quantized:mpasila/Viking-SlimSonnet-v1-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-01T19:32:53Z" | ---
base_model: mpasila/Viking-SlimSonnet-v1-7B
datasets:
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- mpasila/Sonnet3.5-SlimOrcaDedupCleaned-4k-context
language:
- en
- fi
- sv
- 'no'
- da
- is
- nn
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Viking-SlimSonnet-v1-7B-Q8_0-GGUF
This model was converted to GGUF format from [`mpasila/Viking-SlimSonnet-v1-7B`](https://huggingface.co/mpasila/Viking-SlimSonnet-v1-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mpasila/Viking-SlimSonnet-v1-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-SlimSonnet-v1-7B-Q8_0-GGUF --hf-file viking-slimsonnet-v1-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-SlimSonnet-v1-7B-Q8_0-GGUF --hf-file viking-slimsonnet-v1-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-SlimSonnet-v1-7B-Q8_0-GGUF --hf-file viking-slimsonnet-v1-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-SlimSonnet-v1-7B-Q8_0-GGUF --hf-file viking-slimsonnet-v1-7b-q8_0.gguf -c 2048
```
|
infinitejoy/wav2vec2-large-xls-r-300m-galician | infinitejoy | "2022-03-23T18:34:49Z" | 32 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"gl",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language:
- gl
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gl
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Galician
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: gl
metrics:
- name: Test WER
type: wer
value: 101.54
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: gl
metrics:
- name: Test WER
type: wer
value: 105.69
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: gl
metrics:
- name: Test WER
type: wer
value: 101.95
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-galician
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1525
- Wer: 0.1542
## Model description
More information needed
## Intended uses & limitations
More information needed
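A minimal transcription sketch with the `transformers` pipeline (the audio path is illustrative; 16 kHz mono input is assumed):
```python
from transformers import pipeline

# Load the fine-tuned Galician ASR model and transcribe a local audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-galician",
)
print(asr("sample_galician.wav")["text"])
```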
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0067 | 4.35 | 500 | 2.9632 | 1.0 |
| 1.4939 | 8.7 | 1000 | 0.5005 | 0.4157 |
| 0.9982 | 13.04 | 1500 | 0.1967 | 0.1857 |
| 0.8726 | 17.39 | 2000 | 0.1587 | 0.1564 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
goldfish-models/som_latn_10mb | goldfish-models | "2024-08-26T16:51:29Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"goldfish",
"arxiv:2408.10441",
"som",
"dataset:allenai/MADLAD-400",
"dataset:allenai/nllb",
"dataset:cis-lmu/Glot500",
"dataset:castorini/afriberta-corpus",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-13T09:26:06Z" |
---
license: apache-2.0
language:
- som
datasets:
- allenai/MADLAD-400
- allenai/nllb
- cis-lmu/Glot500
- castorini/afriberta-corpus
library_name: transformers
pipeline_tag: text-generation
tags:
- goldfish
- arxiv:2408.10441
---
# som_latn_10mb
Goldfish is a suite of monolingual language models trained for 350 languages.
This model is the <b>Somali</b> (Latin script) model trained on 10MB of data, after accounting for an estimated byte premium of 1.42; content-matched text in Somali takes on average 1.42x as many UTF-8 bytes to encode as English.
The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
Note: som_latn is an [individual language](https://iso639-3.sil.org/code_tables/639/data) code. It is not covered by any of the macrolanguage codes included in Goldfish (for the latn script).
All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://www.arxiv.org/abs/2408.10441).
Training code and sample usage: https://github.com/tylerachang/goldfish
Sample usage also in this Google Colab: [link](https://colab.research.google.com/drive/1rHFpnQsyXJ32ONwCosWZ7frjOYjbGCXG?usp=sharing)
## Model details:
To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json.
All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences.
For best results, make sure that [CLS] is prepended to your input sequence (see sample usage linked above)!
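A minimal generation sketch following that note (the Somali prompt is illustrative, and the explicit [CLS] prepend assumes the tokenizer does not add it automatically):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/som_latn_10mb")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/som_latn_10mb")

# Prepend [CLS] explicitly, as recommended above.
inputs = tokenizer("[CLS]Soomaaliya waa", return_tensors="pt", add_special_tokens=False)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```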
Details for this model specifically:
* Architecture: gpt2
* Parameters: 39087104
* Maximum sequence length: 512 tokens
* Training text data (raw): 14.22MB
* Training text data (byte premium scaled): 10.005MB
* Training tokens: 3095552 (x10 epochs)
* Vocabulary size: 50000
* Compute cost: 2341346381660160.0 FLOPs or ~0.2 NVIDIA A6000 GPU hours
Training datasets (percentages prior to deduplication):
* 38.68291%: [MADLAD-400 (CommonCrawl)](https://huggingface.co/datasets/allenai/MADLAD-400)
* 30.77395%: [NLLB (CommonCrawl and ParaCrawl)](https://huggingface.co/datasets/allenai/nllb)
* 22.30160%: [Glot500](https://huggingface.co/datasets/cis-lmu/Glot500), including [AfriBERTa](https://huggingface.co/datasets/castorini/afriberta-corpus), [AfroMAFT](https://zenodo.org/record/6990611#.Y0-yU-xBw-Q), [CCNet](https://github.com/facebookresearch/cc_net), [Earthlings](https://publicdata.canterbury.ac.nz/Research/Geocorpus/CCGLU_v5.0/), [HornMT](https://github.com/asmelashteka/HornMT), [Wortschatz Leipzig Data](https://wortschatz.uni-leipzig.de/en/download), [TICO](https://tico-19.github.io/)
* 7.64665%: [AfriBERTa](https://huggingface.co/datasets/castorini/afriberta-corpus)
* 0.41754%: [Wikipedia 2023/08](https://dumps.wikimedia.org/)
* 0.17735%: [eBible](https://ebible.org/find/)
## Citation
If you use this model, please cite:
```
@article{chang-etal-2024-goldfish,
title={Goldfish: Monolingual Language Models for 350 Languages},
author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
journal={Preprint},
year={2024},
url={https://www.arxiv.org/abs/2408.10441},
}
```
|
mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF | mradermacher | "2025-04-10T20:22:11Z" | 94 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"zh",
"fr",
"es",
"pt",
"de",
"it",
"ru",
"ja",
"ko",
"vi",
"th",
"ar",
"fa",
"he",
"tr",
"cs",
"pl",
"hi",
"bn",
"ur",
"id",
"ms",
"lo",
"my",
"ceb",
"km",
"tl",
"nl",
"dataset:openai/gsm8k",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:Spestly/Athena-R3-1.5B",
"base_model:quantized:Spestly/Athena-R3-1.5B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-27T22:05:54Z" | ---
base_model: Spestly/Athena-R3-1.5B
datasets:
- openai/gsm8k
- HuggingFaceH4/ultrachat_200k
extra_gated_fields:
Country: country
Date of Birth: date_picker
I agree to use this model in accordance with all applicable laws and ethical guidelines: checkbox
I agree to use this model under the MIT licence: checkbox
Intended Use:
options:
- Research
- Education
- Personal Development
- Commercial Use
- label: Other
value: other
type: select
Name: text
Organization: text
extra_gated_prompt: By accessing this model, you agree to comply with ethical usage
guidelines and accept full responsibility for its applications. You will not use
this model for harmful, malicious, or illegal activities, and you understand that
the model's use is subject to ongoing monitoring for misuse. This model is provided
'AS IS' and agreeing to this means that you are responsible for all the outputs
generated by you
language:
- en
- zh
- fr
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
- fa
- he
- tr
- cs
- pl
- hi
- bn
- ur
- id
- ms
- lo
- my
- ceb
- km
- tl
- nl
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Spestly/Athena-R3-1.5B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
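To fetch a single quant file from Python, a minimal sketch with `huggingface_hub` (the filename follows the table below):
```python
from huggingface_hub import hf_hub_download

# Download one imatrix quant; pass the resulting path to your GGUF runtime of choice.
path = hf_hub_download(
    repo_id="mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF",
    filename="Atlas-Pro-1.5B-Preview.i1-Q4_K_M.gguf",
)
print(path)
```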
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q4_1.gguf) | i1-Q4_1 | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-1.5B-Preview-i1-GGUF/resolve/main/Atlas-Pro-1.5B-Preview.i1-Q6_K.gguf) | i1-Q6_K | 1.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
shafin/chemical-bert-uncased-finetuned-cust-c2 | shafin | "2022-11-12T18:42:36Z" | 106 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-11-12T16:35:07Z" | ---
tags:
- generated_from_trainer
model-index:
- name: chemical-bert-uncased-finetuned-cust-c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chemical-bert-uncased-finetuned-cust-c2
This model is a fine-tuned version of [shafin/chemical-bert-uncased-finetuned-cust](https://huggingface.co/shafin/chemical-bert-uncased-finetuned-cust) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5768
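No usage example is included in the card; as a fill-mask checkpoint it can be queried with the `transformers` pipeline — the chemistry sentence below is purely illustrative:

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="shafin/chemical-bert-uncased-finetuned-cust-c2",
)
# BERT-style models use the [MASK] token.
for pred in fill("Sodium chloride is highly [MASK] in water."):
    print(pred["token_str"], round(pred["score"], 3))
```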
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9422 | 1.0 | 63 | 1.6236 |
| 1.6662 | 2.0 | 126 | 1.5136 |
| 1.5299 | 3.0 | 189 | 1.4435 |
| 1.4542 | 4.0 | 252 | 1.2997 |
| 1.374 | 5.0 | 315 | 1.2431 |
| 1.2944 | 6.0 | 378 | 1.1990 |
| 1.2439 | 7.0 | 441 | 1.1733 |
| 1.2304 | 8.0 | 504 | 1.1494 |
| 1.1495 | 9.0 | 567 | 1.1410 |
| 1.1325 | 10.0 | 630 | 1.1208 |
| 1.0798 | 11.0 | 693 | 1.0691 |
| 1.074 | 12.0 | 756 | 1.0918 |
| 1.0422 | 13.0 | 819 | 1.0823 |
| 1.0124 | 14.0 | 882 | 1.0101 |
| 1.0172 | 15.0 | 945 | 0.9742 |
| 0.9821 | 16.0 | 1008 | 0.9740 |
| 0.9347 | 17.0 | 1071 | 0.9711 |
| 0.9193 | 18.0 | 1134 | 0.9291 |
| 0.9229 | 19.0 | 1197 | 0.9317 |
| 0.8751 | 20.0 | 1260 | 0.9331 |
| 0.8914 | 21.0 | 1323 | 0.9137 |
| 0.8686 | 22.0 | 1386 | 0.9209 |
| 0.8482 | 23.0 | 1449 | 0.8724 |
| 0.8201 | 24.0 | 1512 | 0.8512 |
| 0.8131 | 25.0 | 1575 | 0.8753 |
| 0.8123 | 26.0 | 1638 | 0.8651 |
| 0.8046 | 27.0 | 1701 | 0.8374 |
| 0.7668 | 28.0 | 1764 | 0.8981 |
| 0.7732 | 29.0 | 1827 | 0.8691 |
| 0.7567 | 30.0 | 1890 | 0.7845 |
| 0.7465 | 31.0 | 1953 | 0.8493 |
| 0.7451 | 32.0 | 2016 | 0.8270 |
| 0.7211 | 33.0 | 2079 | 0.8148 |
| 0.7006 | 34.0 | 2142 | 0.8163 |
| 0.7107 | 35.0 | 2205 | 0.7866 |
| 0.6889 | 36.0 | 2268 | 0.7712 |
| 0.674 | 37.0 | 2331 | 0.7762 |
| 0.6847 | 38.0 | 2394 | 0.7583 |
| 0.6639 | 39.0 | 2457 | 0.7800 |
| 0.6615 | 40.0 | 2520 | 0.8270 |
| 0.6566 | 41.0 | 2583 | 0.7851 |
| 0.6364 | 42.0 | 2646 | 0.7645 |
| 0.6261 | 43.0 | 2709 | 0.7044 |
| 0.6338 | 44.0 | 2772 | 0.7952 |
| 0.6315 | 45.0 | 2835 | 0.7439 |
| 0.6122 | 46.0 | 2898 | 0.7566 |
| 0.5941 | 47.0 | 2961 | 0.7124 |
| 0.6076 | 48.0 | 3024 | 0.7591 |
| 0.59 | 49.0 | 3087 | 0.7473 |
| 0.5838 | 50.0 | 3150 | 0.6961 |
| 0.5931 | 51.0 | 3213 | 0.7604 |
| 0.5847 | 52.0 | 3276 | 0.7260 |
| 0.5691 | 53.0 | 3339 | 0.7309 |
| 0.5778 | 54.0 | 3402 | 0.7200 |
| 0.5464 | 55.0 | 3465 | 0.7014 |
| 0.5592 | 56.0 | 3528 | 0.7567 |
| 0.555 | 57.0 | 3591 | 0.7062 |
| 0.5436 | 58.0 | 3654 | 0.7284 |
| 0.5328 | 59.0 | 3717 | 0.6896 |
| 0.5397 | 60.0 | 3780 | 0.7041 |
| 0.5263 | 61.0 | 3843 | 0.7029 |
| 0.5181 | 62.0 | 3906 | 0.7223 |
| 0.5166 | 63.0 | 3969 | 0.7043 |
| 0.5066 | 64.0 | 4032 | 0.6723 |
| 0.5115 | 65.0 | 4095 | 0.6871 |
| 0.4956 | 66.0 | 4158 | 0.6818 |
| 0.5006 | 67.0 | 4221 | 0.7075 |
| 0.4837 | 68.0 | 4284 | 0.6686 |
| 0.4874 | 69.0 | 4347 | 0.6943 |
| 0.4808 | 70.0 | 4410 | 0.6584 |
| 0.4775 | 71.0 | 4473 | 0.6954 |
| 0.4776 | 72.0 | 4536 | 0.6741 |
| 0.4773 | 73.0 | 4599 | 0.6591 |
| 0.4699 | 74.0 | 4662 | 0.7000 |
| 0.4779 | 75.0 | 4725 | 0.6829 |
| 0.4543 | 76.0 | 4788 | 0.6839 |
| 0.4641 | 77.0 | 4851 | 0.6444 |
| 0.4495 | 78.0 | 4914 | 0.6604 |
| 0.4489 | 79.0 | 4977 | 0.6713 |
| 0.4394 | 80.0 | 5040 | 0.6905 |
| 0.4461 | 81.0 | 5103 | 0.6879 |
| 0.4386 | 82.0 | 5166 | 0.6458 |
| 0.4529 | 83.0 | 5229 | 0.6306 |
| 0.4261 | 84.0 | 5292 | 0.6291 |
| 0.4306 | 85.0 | 5355 | 0.6518 |
| 0.4428 | 86.0 | 5418 | 0.6456 |
| 0.4336 | 87.0 | 5481 | 0.6686 |
| 0.4105 | 88.0 | 5544 | 0.6735 |
| 0.4281 | 89.0 | 5607 | 0.6645 |
| 0.4172 | 90.0 | 5670 | 0.6527 |
| 0.4037 | 91.0 | 5733 | 0.6004 |
| 0.4137 | 92.0 | 5796 | 0.6643 |
| 0.4135 | 93.0 | 5859 | 0.6783 |
| 0.3988 | 94.0 | 5922 | 0.6687 |
| 0.4172 | 95.0 | 5985 | 0.6486 |
| 0.3819 | 96.0 | 6048 | 0.6466 |
| 0.3938 | 97.0 | 6111 | 0.5946 |
| 0.4053 | 98.0 | 6174 | 0.6146 |
| 0.3988 | 99.0 | 6237 | 0.6166 |
| 0.3798 | 100.0 | 6300 | 0.6383 |
| 0.386 | 101.0 | 6363 | 0.6631 |
| 0.3962 | 102.0 | 6426 | 0.6298 |
| 0.399 | 103.0 | 6489 | 0.6251 |
| 0.3851 | 104.0 | 6552 | 0.6339 |
| 0.3767 | 105.0 | 6615 | 0.6610 |
| 0.3756 | 106.0 | 6678 | 0.6292 |
| 0.375 | 107.0 | 6741 | 0.6201 |
| 0.3648 | 108.0 | 6804 | 0.6384 |
| 0.3664 | 109.0 | 6867 | 0.6046 |
| 0.3679 | 110.0 | 6930 | 0.6169 |
| 0.368 | 111.0 | 6993 | 0.6450 |
| 0.3605 | 112.0 | 7056 | 0.6518 |
| 0.3675 | 113.0 | 7119 | 0.6082 |
| 0.3559 | 114.0 | 7182 | 0.6232 |
| 0.3563 | 115.0 | 7245 | 0.6438 |
| 0.3664 | 116.0 | 7308 | 0.6381 |
| 0.3662 | 117.0 | 7371 | 0.6412 |
| 0.3596 | 118.0 | 7434 | 0.6631 |
| 0.3447 | 119.0 | 7497 | 0.6065 |
| 0.3421 | 120.0 | 7560 | 0.6072 |
| 0.347 | 121.0 | 7623 | 0.5787 |
| 0.3474 | 122.0 | 7686 | 0.6343 |
| 0.3426 | 123.0 | 7749 | 0.6114 |
| 0.3418 | 124.0 | 7812 | 0.6084 |
| 0.3485 | 125.0 | 7875 | 0.6188 |
| 0.3411 | 126.0 | 7938 | 0.6112 |
| 0.3371 | 127.0 | 8001 | 0.5991 |
| 0.3353 | 128.0 | 8064 | 0.5861 |
| 0.3318 | 129.0 | 8127 | 0.6419 |
| 0.3417 | 130.0 | 8190 | 0.6272 |
| 0.3235 | 131.0 | 8253 | 0.6293 |
| 0.3363 | 132.0 | 8316 | 0.6017 |
| 0.3358 | 133.0 | 8379 | 0.5816 |
| 0.3273 | 134.0 | 8442 | 0.6384 |
| 0.3277 | 135.0 | 8505 | 0.6063 |
| 0.3336 | 136.0 | 8568 | 0.6482 |
| 0.3205 | 137.0 | 8631 | 0.6428 |
| 0.3136 | 138.0 | 8694 | 0.6322 |
| 0.3212 | 139.0 | 8757 | 0.6218 |
| 0.3275 | 140.0 | 8820 | 0.6328 |
| 0.3227 | 141.0 | 8883 | 0.6406 |
| 0.3166 | 142.0 | 8946 | 0.6317 |
| 0.3111 | 143.0 | 9009 | 0.6308 |
| 0.309 | 144.0 | 9072 | 0.5972 |
| 0.316 | 145.0 | 9135 | 0.6229 |
| 0.3163 | 146.0 | 9198 | 0.6244 |
| 0.3125 | 147.0 | 9261 | 0.6195 |
| 0.3164 | 148.0 | 9324 | 0.5676 |
| 0.3151 | 149.0 | 9387 | 0.6225 |
| 0.3014 | 150.0 | 9450 | 0.6044 |
| 0.3106 | 151.0 | 9513 | 0.6262 |
| 0.3065 | 152.0 | 9576 | 0.5927 |
| 0.2982 | 153.0 | 9639 | 0.6402 |
| 0.3054 | 154.0 | 9702 | 0.6329 |
| 0.3172 | 155.0 | 9765 | 0.6227 |
| 0.3005 | 156.0 | 9828 | 0.5882 |
| 0.3174 | 157.0 | 9891 | 0.6049 |
| 0.3023 | 158.0 | 9954 | 0.5990 |
| 0.3013 | 159.0 | 10017 | 0.5909 |
| 0.3044 | 160.0 | 10080 | 0.6317 |
| 0.298 | 161.0 | 10143 | 0.6237 |
| 0.2984 | 162.0 | 10206 | 0.6074 |
| 0.3075 | 163.0 | 10269 | 0.5746 |
| 0.2921 | 164.0 | 10332 | 0.5633 |
| 0.3014 | 165.0 | 10395 | 0.6034 |
| 0.297 | 166.0 | 10458 | 0.6420 |
| 0.2936 | 167.0 | 10521 | 0.6206 |
| 0.2946 | 168.0 | 10584 | 0.5869 |
| 0.2923 | 169.0 | 10647 | 0.5898 |
| 0.2936 | 170.0 | 10710 | 0.5810 |
| 0.2968 | 171.0 | 10773 | 0.5888 |
| 0.2863 | 172.0 | 10836 | 0.6124 |
| 0.3038 | 173.0 | 10899 | 0.5823 |
| 0.2845 | 174.0 | 10962 | 0.6187 |
| 0.2847 | 175.0 | 11025 | 0.5749 |
| 0.2984 | 176.0 | 11088 | 0.5900 |
| 0.297 | 177.0 | 11151 | 0.6243 |
| 0.2914 | 178.0 | 11214 | 0.5839 |
| 0.2904 | 179.0 | 11277 | 0.6085 |
| 0.2946 | 180.0 | 11340 | 0.6257 |
| 0.2934 | 181.0 | 11403 | 0.5918 |
| 0.2858 | 182.0 | 11466 | 0.6072 |
| 0.2912 | 183.0 | 11529 | 0.6394 |
| 0.2771 | 184.0 | 11592 | 0.5962 |
| 0.289 | 185.0 | 11655 | 0.6039 |
| 0.2801 | 186.0 | 11718 | 0.5819 |
| 0.2875 | 187.0 | 11781 | 0.6264 |
| 0.2875 | 188.0 | 11844 | 0.6156 |
| 0.2853 | 189.0 | 11907 | 0.5968 |
| 0.2874 | 190.0 | 11970 | 0.6028 |
| 0.2844 | 191.0 | 12033 | 0.5767 |
| 0.2855 | 192.0 | 12096 | 0.6124 |
| 0.2879 | 193.0 | 12159 | 0.5856 |
| 0.2801 | 194.0 | 12222 | 0.6163 |
| 0.2902 | 195.0 | 12285 | 0.5939 |
| 0.2879 | 196.0 | 12348 | 0.5780 |
| 0.2946 | 197.0 | 12411 | 0.6052 |
| 0.2801 | 198.0 | 12474 | 0.6251 |
| 0.287 | 199.0 | 12537 | 0.5839 |
| 0.2864 | 200.0 | 12600 | 0.5768 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
sail-rvc/jasonsonicv1_e390_s12480 | sail-rvc | "2023-07-14T07:39:38Z" | 2 | 1 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:38:39Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# jasonsonicv1_e390_s12480
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:39:38
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
ahmedheakl/asm2asm-deepseek-1.3b-500k-4ep-local-x86-O0-arm-gnueabi-gcc | ahmedheakl | "2024-10-18T05:44:02Z" | 135 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:deepseek-ai/deepseek-coder-1.3b-instruct",
"base_model:finetune:deepseek-ai/deepseek-coder-1.3b-instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-17T18:13:28Z" | ---
library_name: transformers
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: asm2asm-deepseek-1.3b-500k-4ep-local-x86-O0-arm-gnueabi-gcc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asm2asm-deepseek-1.3b-500k-4ep-local-x86-O0-arm-gnueabi-gcc
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct) on an unknown dataset.
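The card does not document a prompt format. Because the base model is an instruct model, one plausible sketch is to send the x86 assembly through the tokenizer's chat template — the message content, dtype, and generation settings here are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ahmedheakl/asm2asm-deepseek-1.3b-500k-4ep-local-x86-O0-arm-gnueabi-gcc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

x86_asm = "..."  # placeholder: x86 assembly compiled with -O0
messages = [{"role": "user", "content": x86_asm}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```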
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.0663e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
|
MinaMila/llama_instbase_Adult_9ep_66 | MinaMila | "2025-04-02T04:07:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-02T04:03:58Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
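No inference snippet is provided; a minimal loading sketch using Unsloth's fast path (the sequence length and 4-bit loading are assumptions, not settings from this card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MinaMila/llama_instbase_Adult_9ep_66",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,     # assumption
)
FastLanguageModel.for_inference(model)  # switch to the faster inference mode
```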
|
mradermacher/granite-guardian-3.2-5b-i1-GGUF | mradermacher | "2025-03-05T03:07:33Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ibm-granite/granite-guardian-3.2-5b",
"base_model:quantized:ibm-granite/granite-guardian-3.2-5b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-05T00:04:26Z" | ---
base_model: ibm-granite/granite-guardian-3.2-5b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ibm-granite/granite-guardian-3.2-5b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/granite-guardian-3.2-5b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q2_K.gguf) | i1-Q2_K | 2.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ3_S.gguf) | i1-IQ3_S | 2.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ3_M.gguf) | i1-IQ3_M | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q4_0.gguf) | i1-Q4_0 | 3.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 3.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q4_1.gguf) | i1-Q4_1 | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/granite-guardian-3.2-5b-i1-GGUF/resolve/main/granite-guardian-3.2-5b.i1-Q6_K.gguf) | i1-Q6_K | 4.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pavanpreet-gandhi/babyai-classical-ppo-prefinal-experiments-2025-04-09_19-09-44 | pavanpreet-gandhi | "2025-04-09T19:20:37Z" | 0 | 0 | peft | [
"peft",
"pytorch",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | "2025-04-09T19:09:50Z" | |
godofmining/isagiyoichi1 | godofmining | "2025-02-06T07:10:22Z" | 17 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-06T07:08:30Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Katelie/Cartpole-v1 | Katelie | "2024-02-02T18:49:58Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-02T18:39:05Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
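The course notebook handles loading and evaluation; conceptually, the loop behind the reported mean reward looks like this sketch (Gymnasium API; the random action is a stand-in for the trained policy):

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward, done = 0.0, False
while not done:
    # Stand-in for the trained REINFORCE policy: replace with policy.act(obs).
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)  # the trained agent reaches the 500.0 episode cap
```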
|
wrice/wavlm-base-weight-norm-fix | wrice | "2024-09-01T18:10:36Z" | 132 | 0 | transformers | [
"transformers",
"safetensors",
"wavlm",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-09-01T18:10:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SafetyMary/ppo-rnd-Pyramids | SafetyMary | "2023-09-13T03:32:50Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-09-13T03:32:44Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: SafetyMary/ppo-rnd-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
research-dump/roberta-base_mixed_sft_random | research-dump | "2024-06-08T13:01:34Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-08T13:01:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Youssef1234/whisper-base-specAug-non | Youssef1234 | "2024-06-03T20:47:53Z" | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:Youssef1234/whisper-base-specAug",
"base_model:finetune:Youssef1234/whisper-base-specAug",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-03T10:10:04Z" | ---
license: apache-2.0
base_model: Youssef1234/whisper-base-specAug
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-specAug-non
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-specAug-non
This model is a fine-tuned version of [Youssef1234/whisper-base-specAug](https://huggingface.co/Youssef1234/whisper-base-specAug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3537
- Wer: 17.1040
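The card ships no inference code; a minimal sketch with the `transformers` ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Youssef1234/whisper-base-specAug-non",
)
print(asr("sample.wav")["text"])  # placeholder: path to a local audio file
```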
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.208 | 0.25 | 239 | 0.3320 | 15.8293 |
| 0.1384 | 0.5 | 478 | 0.3435 | 17.0993 |
| 0.109 | 0.75 | 717 | 0.3537 | 17.1040 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.15.2
|
YakovElm/Jira5Classic_Balance_DATA_ratio_1 | YakovElm | "2023-05-31T00:26:03Z" | 58 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-31T00:25:27Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira5Classic_Balance_DATA_ratio_1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira5Classic_Balance_DATA_ratio_1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5531
- Train Accuracy: 0.7171
- Validation Loss: 0.5869
- Validation Accuracy: 0.6697
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6711 | 0.5780 | 0.6148 | 0.6560 | 0 |
| 0.5881 | 0.6713 | 0.5785 | 0.6789 | 1 |
| 0.5531 | 0.7171 | 0.5869 | 0.6697 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
barttee/prci | barttee | "2025-02-24T00:06:00Z" | 54 | 1 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-17T16:25:50Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: prci
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
widget:
- text: >-
Photorealistic photo of prci, gray nissan qashqai, standing on a grey brick
pavement, trees and blue sky in the background. Taken by phone
output:
url: images/example_88v40fgl6.png
- text: >-
Photorealistic photo of prci, side of blue toyota avensis, neighboorhood
houses and blue sky in the background. Taken by phone
output:
url: images/example_ls5wtfmis.png
- text: >-
Photorealistic photo of prci, a dark blue bmw m8 coupe, standing on a
sidewalk in neighboorhood, realistic lighting, no plate
output:
url: images/example_5nc9cy385.png
- text: >-
Photorealistic photo of prci, black skoda octavia 2019 with a broken left
side, after crash accident, open hood, standing on a grey bricked pavement
with a empty parking field, brick, some cars in the background standing in
line, grey sky
output:
url: images/example_z05h8t0i5.png
---
# Photorealistic Car Images
A Flux LoRA trained locally on dataset of various low/midrange cars close-up photos with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
You can use it to get better quality of car photos with realistic reflections, and more realistic backgrounds.
<Gallery />
## Trigger words
You should use `prci` to trigger image generation with the LoRA.
Recommended prompt format is:
"prci, [car color, brand, model], [environment/background description]"
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
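For `diffusers` users, a minimal sketch — the prompt is the first widget example above, while the sampler settings are common FLUX.1-dev defaults rather than values from this card:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("barttee/prci")
pipe.to("cuda")

image = pipe(
    "prci, gray nissan qashqai, standing on a grey brick pavement, "
    "trees and blue sky in the background",
    num_inference_steps=28,   # assumption: typical FLUX.1-dev setting
    guidance_scale=3.5,       # assumption: typical FLUX.1-dev setting
).images[0]
image.save("prci.png")
```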
|
Kiefels/dwayne-dibley-flux-v2 | Kiefels | "2025-02-11T14:17:15Z" | 65 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-25T21:46:04Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/dwayne-dibley-flux-v2_003360_00_20250125214426.png
text: Dwayne Dibbley, Dwayne Dibley, Duane Dibley
- text: >-
Dwayne Dibbley, is standing in a 1980s disco dancefloor wearing flared tweed
trousers, brown plastic open toed sandals and a white nylon shirt, moving
embarrasingly toward some fit women
output:
url: images/example_uft6bsu1o.png
- text: >-
Dwayne Dibbley, is standing in a 1970s disco dancefloor wearing flared tweed
trousers, brown plastic open toed sandals and a white nylon shirt, dancing
like a dork
output:
url: images/example_kwmo9i51t.png
- text: >-
Dwayne Dibbley, holding up an old thermos flask and a blue tooth brush,
smiling and happy as he is stood ready to go out on a date
output:
url: images/example_heirs6oci.png
- text: >-
Dwayne Dibley is opening a bottle of beer labelled "Red Dwarf, Wicked
Strength Lager" using just his teeth.
output:
url: images/example_0vm2ystln.png
- text: >-
Tall and skinny Dwayne Dibley , wide angle full body shot in extreme detail
8K , standing on a train station platform, holding a placard saying I'm a no
sense gimboid!!!, wearing a green Anorak, brown corduroy, flared trousers,
brown plastic sandals and white socks.
output:
url: images/example_imvnqid7q.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Dwayne Dibbley, Dwayne Dibley, Duane Dibley
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# dwayne-dibley-flux-v2
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Dwayne Dibbley, Dwayne Dibley, Duane Dibley` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
sd-concepts-library/star-tours-posters | sd-concepts-library | "2022-09-09T12:38:42Z" | 0 | 3 | null | [
"license:mit",
"region:us"
] | null | "2022-09-09T12:38:36Z" | ---
license: mit
---
### Star Tours Posters on Stable Diffusion
This is the `<star-tours>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
hwaback/roberta-base-klue-ynat-classification | hwaback | "2025-03-24T14:00:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-24T13:54:17Z" | |
myst72/Llama-3-8B_MIFT-en_Alldata_v3_QLoRA-PIFT-EnJa_manywords-1000_v0 | myst72 | "2025-03-04T12:56:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-04T12:50:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/nacha-pony-checkpoint-v05-sdxl | John6666 | "2024-08-31T13:24:38Z" | 220 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"cute",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-08-31T13:14:55Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- cute
- pony
---
The original model is [here](https://civitai.com/models/705882/nacha-pony-checkpoint?modelVersionId=789563).
This model was created by [Cocohead](https://civitai.com/user/Cocohead).
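Since the repository is in diffusers format (note the `diffusers:StableDiffusionXLPipeline` tag above), a minimal loading sketch might look like the following; the prompt and sampler settings are illustrative only, not recommendations from the model author:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL checkpoint from the Hub; fp16 keeps VRAM usage modest
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/nacha-pony-checkpoint-v05-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Pony-derived checkpoints commonly expect score tags at the start of the prompt
image = pipe(
    "score_9, score_8_up, 1girl, cute, anime style",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```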
|
RichardErkhov/laurenhyoseoyoon_-_gemma2-2b-it-finetune-locations-full-2-awq | RichardErkhov | "2025-01-06T18:42:17Z" | 5 | 0 | null | [
"safetensors",
"gemma2",
"arxiv:1910.09700",
"4-bit",
"awq",
"region:us"
] | null | "2025-01-06T18:40:22Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma2-2b-it-finetune-locations-full-2 - AWQ
- Model creator: https://huggingface.co/laurenhyoseoyoon/
- Original model: https://huggingface.co/laurenhyoseoyoon/gemma2-2b-it-finetune-locations-full-2/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
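The card itself leaves this section blank. As a hedged sketch only: AWQ checkpoints in this layout can usually be loaded straight through transformers (with the `autoawq` package installed), since the quantization config is read from the checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/laurenhyoseoyoon_-_gemma2-2b-it-finetune-locations-full-2-awq"

# transformers picks up the AWQ quantization config from config.json;
# the autoawq package supplies the 4-bit kernels.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Where is the Eiffel Tower?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```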
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF | mradermacher | "2025-01-22T06:35:26Z" | 1,859 | 7 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"12b",
"chat",
"roleplay",
"creative-writing",
"DELLA-linear",
"en",
"base_model:redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2",
"base_model:quantized:redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-19T11:00:57Z" | ---
base_model: redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- 12b
- chat
- roleplay
- creative-writing
- DELLA-linear
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
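As a concrete sketch (assuming `llama-cpp-python` as the runtime; any GGUF-capable loader works), a single-file quant from the table below can be fetched and loaded like this:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one of the single-file quants listed in the table below
path = hf_hub_download(
    repo_id="mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF",
    filename="AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
result = llm("Write a one-line greeting.", max_tokens=32)
print(result["choices"][0]["text"])
```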
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2-GGUF/resolve/main/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v2.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
prithivMLmods/Triangulum-10B-it | prithivMLmods | "2025-01-04T12:58:58Z" | 21 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"triangulum_10b",
"sft",
"chain_of_thought",
"ollama",
"text-generation-inference",
"llama_for_causal_lm",
"reasoning",
"CoT",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-04T07:48:35Z" | ---
license: creativeml-openrail-m
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
tags:
- triangulum_10b
- sft
- chain_of_thought
- ollama
- text-generation-inference
- llama_for_causal_lm
- reasoning
- CoT
library_name: transformers
metrics:
- code_eval
- accuracy
- competition_math
- character
---

<pre align="center">
__ .__ .__
_/ |_ _______ |__|_____ ____ ____ __ __ | | __ __ _____
\ __\\_ __ \| |\__ \ / \ / ___\ | | \| | | | \ / \
| | | | \/| | / __ \_| | \/ /_/ >| | /| |__| | /| Y Y \
|__| |__| |__|(____ /|___| /\___ / |____/ |____/|____/ |__|_| /
\/ \//_____/ \/
</pre>
# **Triangulum 10B it: Multilingual Large Language Models (LLMs)**
Triangulum 10B it is a collection of pretrained and instruction-tuned generative models designed for multilingual applications. These models are trained on synthetic datasets built around long chains of thought, enabling them to perform complex reasoning tasks effectively.
# **Key Features**
- **Foundation Model**: Built upon LLaMA's autoregressive language model, leveraging an optimized transformer architecture for enhanced performance.
- **Instruction Tuning**: Includes supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align model outputs with human preferences for helpfulness and safety.
- **Multilingual Support**: Designed to handle multiple languages, ensuring broad applicability across diverse linguistic contexts.
# **Training Approach**
1. **Synthetic Datasets**: Utilizes long chain-of-thought synthetic data to enhance reasoning capabilities.
2. **Supervised Fine-Tuning (SFT)**: Aligns the model to specific tasks through curated datasets.
3. **Reinforcement Learning with Human Feedback (RLHF)**: Ensures the model adheres to human values and safety guidelines through iterative training processes.
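The card does not publish its training code; purely as an illustration of the SFT stage described above, a TRL-style sketch (the dataset file and settings are hypothetical, not the model's actual training setup) could look like:

```python
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

# Hypothetical long chain-of-thought dataset; not the model's actual training data
dataset = load_dataset("json", data_files="cot_synthetic.jsonl", split="train")

trainer = SFTTrainer(
    model="prithivMLmods/Triangulum-10B",  # TRL accepts a Hub id or a loaded model
    train_dataset=dataset,
    args=SFTConfig(output_dir="triangulum-sft", max_steps=100),
)
trainer.train()
```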
# **How to use with transformers**
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "prithivMLmods/Triangulum-10B"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are the kind and tri-intelligent assistant helping people to understand complex concepts."},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
# **Demo Inference LlamaForCausalLM**
```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('prithivMLmods/Triangulum-10B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"prithivMLmods/Triangulum-10B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
# Define a list of system and user prompts
prompts = [
"""<|im_start|>system
You are the kind and tri-intelligent assistant helping people to understand complex concepts.<|im_end|>
<|im_start|>user
Can you explain the concept of eigenvalues and eigenvectors in a simple way?<|im_end|>
<|im_start|>assistant"""
]
# Generate responses for each prompt
for chat in prompts:
    print(f"Prompt:\n{chat}\n")
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response:\n{response}\n{'-'*80}\n")
```
# **Key Adjustments**
1. **System Prompts:** Each prompt defines a different role or persona for the AI to adopt.
2. **User Prompts:** These specify the context or task for the assistant, ranging from teaching to storytelling or career advice.
3. **Looping Through Prompts:** Each prompt is processed in a loop to showcase the model's versatility.
You can expand the list of prompts to explore a variety of scenarios and responses.
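For example, further role/task pairs can be appended in the same ChatML-style format used above (the persona here is made up for illustration):

```python
prompts.append("""<|im_start|>system
You are a patient programming tutor who explains ideas step by step.<|im_end|>
<|im_start|>user
What is the difference between a list and a tuple in Python?<|im_end|>
<|im_start|>assistant""")
```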
# **Use Cases for T10B**
- Multilingual content generation
- Question answering and dialogue systems
- Text summarization and analysis
- Translation and localization tasks
# **Technical Details**
Triangulum 10B employs a state-of-the-art autoregressive architecture inspired by LLaMA. The optimized transformer framework ensures both efficiency and scalability, making it suitable for a variety of use cases.
# **How to Run Triangulum 10B on Ollama Locally**
This guide demonstrates the power of using open-source LLMs locally, showcasing examples with different open-source models for various use cases. By the end, you'll be equipped to run any future open-source LLM with ease.
---
## Example 1: How to Run the Triangulum-10B Model
The **Triangulum-10B** model is an open-source LLM known for its capabilities across text-based tasks. We'll interact with it similarly to ChatGPT, but run it locally with support for quants.
### Step 1: Download the Model
First, download the **Triangulum-10B-F16.gguf** model using the following command:
```bash
ollama run triangulum-10b-f16.gguf
```
### Step 2: Model Initialization and Download
Upon running the command, Ollama will initialize and download the model files. You should see output similar to the following:
```plaintext
pulling manifest
pulling 8934d96d3f08... 100% ████████████████████████████████████████ 3.8 GB
pulling 8c17c2ebb0ea... 100% ████████████████████████████████████████ 7.0 KB
pulling 7c23fb36d801... 100% ████████████████████████████████████████ 4.8 KB
pulling 2e0493f67d0c... 100% ████████████████████████████████████████   59 B
pulling fa304d675061... 100% ████████████████████████████████████████   91 B
pulling 42ba7f8a01dd... 100% ████████████████████████████████████████  557 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> Send a message (/? for help)
```
### Step 3: Interact with the Model
Once the model is ready, you can send a prompt. For example, let's ask:
```plaintext
>>> What can you do for me?
```
You should then receive a response along these lines:
```plaintext
As a responsible AI language model, I am here to assist you with any questions or tasks you may have. Here are some examples of things I can help with:
1. Answering questions: I can provide information on a wide range of topics, from science and technology to history and culture.
2. Generating ideas: I can help you brainstorm ideas for creative projects, or provide suggestions for solving problems.
3. Writing assistance: I can help you with writing tasks such as proofreading, editing, and suggesting alternative words or phrases.
4. Translation: I can translate text from one language to another.
5. Summarizing content: I can summarize long pieces of text, such as articles or documents, into shorter, more digestible versions.
6. Creativity: I can help you generate creative ideas for stories, poems, or other forms of writing.
7. Language learning: I can assist you in learning a new language by providing grammar explanations, vocabulary lists, and practice exercises.
8. Chatting: I'm here to chat with you and provide a response to any question or topic you'd like to discuss.
Please let me know if there is anything specific you would like me to help you with.
```
### Step 4: Exit the Program
To exit the program, simply type:
```plaintext
/exit
```
## Example 2: Running Multi-Modal Models (Future Use)
Ollama supports running multi-modal models where you can send images and ask questions based on them. This section will be updated as more models become available.
## Notes on Using Quantized Models
Quantized models like **triangulum-10b-f16.gguf** are optimized for performance on resource-constrained hardware, making them accessible for local inference.
1. Ensure your system has sufficient VRAM or CPU resources.
2. Use the `.gguf` model format for compatibility with Ollama.
# **Conclusion**
Running the **Triangulum-10B** model with Ollama provides a robust way to leverage open-source LLMs locally for diverse use cases. By following these steps, you can explore the capabilities of other open-source models in the future. |
ojas-kool/llama-3-8b-Instruct-bnb-4bit-name-generator-GGUF | ojas-kool | "2024-06-09T12:38:35Z" | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-09T12:34:45Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** ojas-kool
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
EnzoZacharias/starcoder-fine-tuned-plc_V1 | EnzoZacharias | "2023-09-21T09:41:57Z" | 0 | 0 | null | [
"generated_from_trainer",
"base_model:bigcode/starcoder",
"base_model:finetune:bigcode/starcoder",
"license:bigcode-openrail-m",
"region:us"
] | null | "2023-09-21T09:20:41Z" | ---
license: bigcode-openrail-m
base_model: bigcode/starcoder
tags:
- generated_from_trainer
model-index:
- name: starcoder-fine-tuned-plc_V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starcoder-fine-tuned-plc_V1
This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
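The training script itself is not included in this card; a hedged reconstruction of the listed settings as `transformers.TrainingArguments` (the `output_dir` is illustrative) would be:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above
args = TrainingArguments(
    output_dir="starcoder-fine-tuned-plc_V1",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # total train batch size: 4
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,
    seed=42,
)
```

The listed Adam betas and epsilon match the trainer's defaults, so they need no explicit flags.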
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.1.0.dev20230823
- Datasets 2.14.4
- Tokenizers 0.13.3
|
boostcamp-5th-nlp07/qlora-koalpaca-polyglot-5.8b-fast | boostcamp-5th-nlp07 | "2023-07-10T13:29:43Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-10T13:29:38Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
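The same flags can be reproduced with a `BitsAndBytesConfig`; a minimal sketch matching the values above:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Pass it as: AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
```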
### Framework versions
- PEFT 0.4.0.dev0
|