Dataset columns:

| Column | Type | Range |
|---|---|---|
| modelId | string | 5–138 chars |
| author | string | 2–42 chars |
| last_modified | date | 2020-02-15 11:33:14 – 2025-04-16 06:27:39 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 427 classes |
| tags | sequence | 1 – 4.05k items |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 – 2025-04-16 06:26:18 |
| card | string | 11 – 1.01M chars |
RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf | RichardErkhov | "2025-03-21T14:39:09Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-21T14:16:11Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CulturaX-zh-unsupervised-half - GGUF
- Model creator: https://huggingface.co/autoprogrammer/
- Original model: https://huggingface.co/autoprogrammer/CulturaX-zh-unsupervised-half/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CulturaX-zh-unsupervised-half.Q2_K.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q2_K.gguf) | Q2_K | 0.54GB |
| [CulturaX-zh-unsupervised-half.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [CulturaX-zh-unsupervised-half.IQ3_S.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [CulturaX-zh-unsupervised-half.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [CulturaX-zh-unsupervised-half.IQ3_M.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [CulturaX-zh-unsupervised-half.Q3_K.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q3_K.gguf) | Q3_K | 0.64GB |
| [CulturaX-zh-unsupervised-half.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [CulturaX-zh-unsupervised-half.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [CulturaX-zh-unsupervised-half.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [CulturaX-zh-unsupervised-half.Q4_0.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q4_0.gguf) | Q4_0 | 0.72GB |
| [CulturaX-zh-unsupervised-half.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [CulturaX-zh-unsupervised-half.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [CulturaX-zh-unsupervised-half.Q4_K.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q4_K.gguf) | Q4_K | 0.75GB |
| [CulturaX-zh-unsupervised-half.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [CulturaX-zh-unsupervised-half.Q4_1.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q4_1.gguf) | Q4_1 | 0.77GB |
| [CulturaX-zh-unsupervised-half.Q5_0.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q5_0.gguf) | Q5_0 | 0.83GB |
| [CulturaX-zh-unsupervised-half.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [CulturaX-zh-unsupervised-half.Q5_K.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q5_K.gguf) | Q5_K | 0.85GB |
| [CulturaX-zh-unsupervised-half.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [CulturaX-zh-unsupervised-half.Q5_1.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q5_1.gguf) | Q5_1 | 0.89GB |
| [CulturaX-zh-unsupervised-half.Q6_K.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q6_K.gguf) | Q6_K | 0.95GB |
| [CulturaX-zh-unsupervised-half.Q8_0.gguf](https://huggingface.co/RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf/blob/main/CulturaX-zh-unsupervised-half.Q8_0.gguf) | Q8_0 | 1.23GB |
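As a minimal sketch (not part of the original card), one of the quant files above can be fetched and run locally with `huggingface_hub` and `llama-cpp-python`; the repo and file names come from the table, while the context size, prompt, and sampling settings are illustrative assumptions:

```python
# Sketch: download one quant from the table above and run it locally.
# Assumes `pip install llama-cpp-python huggingface_hub`; the prompt and
# generation settings are placeholders, not recommendations.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/autoprogrammer_-_CulturaX-zh-unsupervised-half-gguf",
    filename="CulturaX-zh-unsupervised-half.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("你好，请介绍一下你自己。", max_tokens=128)
print(out["choices"][0]["text"])
```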
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
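The card leaves this section blank; as a hedged sketch, the original checkpoint should load through the standard transformers causal-LM API (the repository id is taken from the links at the top of this card, everything else is an assumption):

```python
# Minimal sketch, assuming a standard causal-LM checkpoint;
# the repo id is the original model referenced above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "autoprogrammer/CulturaX-zh-unsupervised-half"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("你好", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```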
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf | RichardErkhov | "2025-02-19T06:03:17Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-19T05:24:19Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
asm2asm-yi-1.5b-1k-float16 - GGUF
- Model creator: https://huggingface.co/ahmedheakl/
- Original model: https://huggingface.co/ahmedheakl/asm2asm-yi-1.5b-1k-float16/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [asm2asm-yi-1.5b-1k-float16.Q2_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q2_K.gguf) | Q2_K | 0.59GB |
| [asm2asm-yi-1.5b-1k-float16.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.IQ3_XS.gguf) | IQ3_XS | 0.65GB |
| [asm2asm-yi-1.5b-1k-float16.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.IQ3_S.gguf) | IQ3_S | 0.67GB |
| [asm2asm-yi-1.5b-1k-float16.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q3_K_S.gguf) | Q3_K_S | 0.67GB |
| [asm2asm-yi-1.5b-1k-float16.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.IQ3_M.gguf) | IQ3_M | 0.7GB |
| [asm2asm-yi-1.5b-1k-float16.Q3_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q3_K.gguf) | Q3_K | 0.73GB |
| [asm2asm-yi-1.5b-1k-float16.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q3_K_M.gguf) | Q3_K_M | 0.73GB |
| [asm2asm-yi-1.5b-1k-float16.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q3_K_L.gguf) | Q3_K_L | 0.77GB |
| [asm2asm-yi-1.5b-1k-float16.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.IQ4_XS.gguf) | IQ4_XS | 0.78GB |
| [asm2asm-yi-1.5b-1k-float16.Q4_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q4_0.gguf) | Q4_0 | 0.81GB |
| [asm2asm-yi-1.5b-1k-float16.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.IQ4_NL.gguf) | IQ4_NL | 0.81GB |
| [asm2asm-yi-1.5b-1k-float16.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q4_K_S.gguf) | Q4_K_S | 0.84GB |
| [asm2asm-yi-1.5b-1k-float16.Q4_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q4_K.gguf) | Q4_K | 0.9GB |
| [asm2asm-yi-1.5b-1k-float16.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q4_K_M.gguf) | Q4_K_M | 0.9GB |
| [asm2asm-yi-1.5b-1k-float16.Q4_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q4_1.gguf) | Q4_1 | 0.88GB |
| [asm2asm-yi-1.5b-1k-float16.Q5_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q5_0.gguf) | Q5_0 | 0.96GB |
| [asm2asm-yi-1.5b-1k-float16.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q5_K_S.gguf) | Q5_K_S | 0.98GB |
| [asm2asm-yi-1.5b-1k-float16.Q5_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q5_K.gguf) | Q5_K | 1.02GB |
| [asm2asm-yi-1.5b-1k-float16.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q5_K_M.gguf) | Q5_K_M | 1.02GB |
| [asm2asm-yi-1.5b-1k-float16.Q5_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q5_1.gguf) | Q5_1 | 1.04GB |
| [asm2asm-yi-1.5b-1k-float16.Q6_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q6_K.gguf) | Q6_K | 1.19GB |
| [asm2asm-yi-1.5b-1k-float16.Q8_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-yi-1.5b-1k-float16-gguf/blob/main/asm2asm-yi-1.5b-1k-float16.Q8_0.gguf) | Q8_0 | 1.46GB |
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: 01-ai/Yi-Coder-1.5B-Chat
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: asm2asm-yi-1.5b-1k-float16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asm2asm-yi-1.5b-1k-float16
This model is a fine-tuned version of [01-ai/Yi-Coder-1.5B-Chat](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
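For reference, these bullets map onto transformers `TrainingArguments` roughly as follows; this is a reconstruction from the list above, not the authors' actual training script:

```python
# Sketch: the listed hyperparameters expressed as TrainingArguments.
# Adam betas/epsilon match the transformers defaults and are omitted.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="asm2asm-yi-1.5b-1k-float16",  # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size = 4
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=1,
)
```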
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
jontromanab/cartPoleV1 | jontromanab | "2023-04-18T15:52:32Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-18T15:52:16Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartPoleV1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 70.40 +/- 32.64
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dpreciado3/caramex-v7-sdxl | dpreciado3 | "2023-11-03T06:26:57Z" | 5 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-11-03T05:18:20Z" |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: caramex
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - dpreciado3/caramex-v7-sdxl
These are the LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the caramex dataset. Below you can find some example images.




LoRA was enabled for the text encoder: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
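As a hedged sketch (the card itself ships no usage code), the adapter should attach through the standard diffusers LoRA API; the prompt is a placeholder, since the card documents no trigger word:

```python
# Sketch: load the SDXL base model and attach this LoRA adapter.
# The prompt is a placeholder; the card documents no trigger word.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("dpreciado3/caramex-v7-sdxl")

image = pipe("a photo in the caramex style").images[0]
image.save("caramex_sample.png")
```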
|
mradermacher/Qwen2.5-14B-Brocav5-GGUF | mradermacher | "2025-01-09T14:28:34Z" | 265 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:CultriX/Qwen2.5-14B-Brocav5",
"base_model:quantized:CultriX/Qwen2.5-14B-Brocav5",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-09T13:32:17Z" | ---
base_model: CultriX/Qwen2.5-14B-Brocav5
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/CultriX/Qwen2.5-14B-Brocav5
<!-- provided-files -->
Weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
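The multi-part case mentioned above amounts to plain byte-level concatenation; a minimal sketch, with hypothetical part names (adjust them to the actual filenames in the repo):

```python
# Sketch: merge a multi-part GGUF split back into a single file.
# The part names below are hypothetical placeholders.
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)  # append raw bytes in order
```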
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Brocav5-GGUF/resolve/main/Qwen2.5-14B-Brocav5.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF | mradermacher | "2024-11-14T08:54:26Z" | 12 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"base_model:rombodawg/EveryoneLLM-7b-Gemma-Base",
"base_model:quantized:rombodawg/EveryoneLLM-7b-Gemma-Base",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-11-14T05:22:37Z" | ---
base_model: rombodawg/EveryoneLLM-7b-Gemma-Base
language:
- en
library_name: transformers
license: other
license_link: https://ai.google.dev/gemma/terms
license_name: gemma-terms-of-use
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/rombodawg/EveryoneLLM-7b-Gemma-Base
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 2.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q2_K.gguf) | i1-Q2_K | 3.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ3_S.gguf) | i1-IQ3_S | 4.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ3_M.gguf) | i1-IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.1 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.1 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.1 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q4_0.gguf) | i1-Q4_0 | 5.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/EveryoneLLM-7b-Gemma-Base-i1-GGUF/resolve/main/EveryoneLLM-7b-Gemma-Base.i1-Q6_K.gguf) | i1-Q6_K | 7.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MatteoColavita/ppo-LunarLander-v2-U8 | MatteoColavita | "2023-08-15T21:16:22Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-15T18:35:06Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 206.54 +/- 77.16
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
'seed': 1,
'torch_deterministic': True,
'cuda': True,
'track': False,
'wandb_project_name': 'cleanRL',
'wandb_entity': None,
'capture_video': False,
'env_id': 'LunarLander-v2',
'total_timesteps': 150000,
'learning_rate': 0.0025,
'num_envs': 4,
'num_steps': 128,
'anneal_lr': True,
'gae': True,
'gamma': 0.99,
'gae_lambda': 0.95,
'num_minibatches': 4,
'update_epochs': 4,
'norm_adv': True,
'clip_coef': 0.2,
'clip_vloss': True,
'ent_coef': 0.01,
'vf_coef': 0.5,
'max_grad_norm': 0.5,
'target_kl': None,
'repo_id': 'MatteoColavita/ppo-LunarLander-v2-U8',
'batch_size': 512,
'minibatch_size': 128}
```
|
nhoxinh/e7caeb20-206c-4574-bd48-88f58e14d8ce | nhoxinh | "2025-01-29T04:58:41Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-29T04:40:52Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e7caeb20-206c-4574-bd48-88f58e14d8ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c41dceb4dcef6920_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c41dceb4dcef6920_train_data.json
type:
field_input: first_sentence_changes
field_instruction: first_sentence
field_output: second_sentence_counterfactual_words
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/e7caeb20-206c-4574-bd48-88f58e14d8ce
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c41dceb4dcef6920_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e7caeb20-206c-4574-bd48-88f58e14d8ce
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8589
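Since this is a PEFT/LoRA adapter rather than full weights, here is a hedged loading sketch; the base model id comes from the config above, and the quantization flags used during training are omitted:

```python
# Sketch: attach the LoRA adapter to its base model with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "nhoxinh/e7caeb20-206c-4574-bd48-88f58e14d8ce")
```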
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6898 | 0.7519 | 200 | 1.8589 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AIDA-UPM/MARTINI_enrich_BERTopic_chiefnerd | AIDA-UPM | "2025-01-13T18:22:17Z" | 5 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | "2025-01-13T18:22:05Z" |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# MARTINI_enrich_BERTopic_chiefnerd
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("AIDA-UPM/MARTINI_enrich_BERTopic_chiefnerd")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 45
* Number of training documents: 5779
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | vaccinated - pfizer - myocarditis - doctors - 2021 | 20 | -1_vaccinated_pfizer_myocarditis_doctors |
| 0 | twitter - censorship - musk - taibbi - shareholders | 2904 | 0_twitter_censorship_musk_taibbi |
| 1 | ukraine - kremlin - zelensky - sanctions - blinken | 249 | 1_ukraine_kremlin_zelensky_sanctions |
| 2 | trudeau - ottawa - convoy - alberta - freedom | 187 | 2_trudeau_ottawa_convoy_alberta |
| 3 | died - defibrillator - footballer - bronny - sudden | 156 | 3_died_defibrillator_footballer_bronny |
| 4 | rogan - misinformation - podcast - cnn - meidastouch | 132 | 4_rogan_misinformation_podcast_cnn |
| 5 | fauci - coronaviruses - bats - laboratory - pentagon | 130 | 5_fauci_coronaviruses_bats_laboratory |
| 6 | mandates - unvaccinated - hochul - repeal - medicare | 125 | 6_mandates_unvaccinated_hochul_repeal |
| 7 | maricopa - ballots - karrin - deputies - pelosi | 113 | 7_maricopa_ballots_karrin_deputies |
| 8 | rfk - undebatable - propagandists - snowden - debbie | 103 | 8_rfk_undebatable_propagandists_snowden |
| 9 | ford - tesla - electricity - fuels - prices | 102 | 9_ford_tesla_electricity_fuels |
| 10 | transgender - virginia - mandates - born - schools | 99 | 10_transgender_virginia_mandates_born |
| 11 | scammers - deleted - telegram - spamming - subscribers | 82 | 11_scammers_deleted_telegram_spamming |
| 12 | illegals - migrant - border - biden - dhs | 78 | 12_illegals_migrant_border_biden |
| 13 | pfizer - whistleblower - lawsuit - falsified - jackson | 77 | 13_pfizer_whistleblower_lawsuit_falsified |
| 14 | vaccinated - omicron - hospitalizations - contagious - israel | 67 | 14_vaccinated_omicron_hospitalizations_contagious |
| 15 | noaa - co2 - gmo - greenland - globalism | 67 | 15_noaa_co2_gmo_greenland |
| 16 | scotus - overturned - abortions - filibuster - voted | 64 | 16_scotus_overturned_abortions_filibuster |
| 17 | worldcouncilforhealth - doctors - mccullough - denying - malpractice | 64 | 17_worldcouncilforhealth_doctors_mccullough_denying |
| 18 | fdic - yellen - depositors - collapse - blackrock | 62 | 18_fdic_yellen_depositors_collapse |
| 19 | myocarditis - myopericarditis - troponin - electrocardiogram - vaccination | 62 | 19_myocarditis_myopericarditis_troponin_electrocardiogram |
| 20 | pfizer - booster - injections - ages - updated | 60 | 20_pfizer_booster_injections_ages |
| 21 | hydroxychloroquine - ivermectin - paxlovid - remdesivir - molnupiravir | 55 | 21_hydroxychloroquine_ivermectin_paxlovid_remdesivir |
| 22 | mortuaries - 2022 - matthews - insurers - increase | 53 | 22_mortuaries_2022_matthews_insurers |
| 23 | bidens - fbi - dailymail - hacked - zuckerberg | 50 | 23_bidens_fbi_dailymail_hacked |
| 24 | vaccinated - nfl - durant - mvp - kevin | 49 | 24_vaccinated_nfl_durant_mvp |
| 25 | gates - billion - epstein - pandemics - donation | 48 | 25_gates_billion_epstein_pandemics |
| 26 | thimerosal - immunizing - autism - rfk - shingles | 47 | 26_thimerosal_immunizing_autism_rfk |
| 27 | miscarriages - menstruators - pcos - diethylstilbestrol - hysterectomies | 40 | 27_miscarriages_menstruators_pcos_diethylstilbestrol |
| 28 | ufo - airship - missile - landed - pentagon | 37 | 28_ufo_airship_missile_landed |
| 29 | shootings - sheriff - murdered - manhunt - maine | 33 | 29_shootings_sheriff_murdered_manhunt |
| 30 | monkeypox - leishmaniasis - symptoms - pustules - ghebreyesus | 31 | 30_monkeypox_leishmaniasis_symptoms_pustules |
| 31 | hannity - tucker - nielsen - newscast - viewers | 31 | 31_hannity_tucker_nielsen_newscast |
| 32 | impeachment - donald - testifying - epstein - defendant | 30 | 32_impeachment_donald_testifying_epstein |
| 33 | immunogenic - mrna - exosomes - sars - phagocytes | 30 | 33_immunogenic_mrna_exosomes_sars |
| 34 | fbi - whistleblowers - prosecuting - wray - christopher | 28 | 34_fbi_whistleblowers_prosecuting_wray |
| 35 | pfizer - revenue - billion - injectable - 529 | 27 | 35_pfizer_revenue_billion_injectable |
| 36 | vaers - reported - doses - 049 - shingles | 26 | 36_vaers_reported_doses_049 |
| 37 | clotting - thrombocytopenia - heparin - complications - astrazeneca | 26 | 37_clotting_thrombocytopenia_heparin_complications |
| 38 | mayor - immigrants - bronx - hochul - shelters | 24 | 38_mayor_immigrants_bronx_hochul |
| 39 | superfund - ohio - hazardous - derailed - tanker | 24 | 39_superfund_ohio_hazardous_derailed |
| 40 | budweiser - distributors - mulvaney - lite - downgraded | 23 | 40_budweiser_distributors_mulvaney_lite |
| 41 | biden - pandemic - vaccinate - announce - coordinator | 22 | 41_biden_pandemic_vaccinate_announce |
| 42 | omicron - mutated - virulent - subvariants - genomic | 22 | 42_omicron_mutated_virulent_subvariants |
| 43 | cdc - walensky - director - advised - proclamations | 20 | 43_cdc_walensky_director_advised |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
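Those settings correspond to the following constructor call; a sketch for training a comparable model, not the exact pipeline used here (the embedding model and training corpus are unknown):

```python
# Sketch: a BERTopic instance mirroring the hyperparameters above.
from bertopic import BERTopic

topic_model = BERTopic(
    calculate_probabilities=True,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    top_n_words=10,
    verbose=False,
    zeroshot_min_similarity=0.7,
)
# topics, probs = topic_model.fit_transform(docs)  # docs: list[str]
```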
## Framework versions
* Numpy: 1.26.4
* HDBSCAN: 0.8.40
* UMAP: 0.5.7
* Pandas: 2.2.3
* Scikit-Learn: 1.5.2
* Sentence-transformers: 3.3.1
* Transformers: 4.46.3
* Numba: 0.60.0
* Plotly: 5.24.1
* Python: 3.10.12
|
golesheed/whisper-native-children-1-dutch | golesheed | "2024-01-31T16:49:25Z" | 60 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"nl",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-01-31T14:35:53Z" | ---
language:
- nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1478
- Wer: 7.7540
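A minimal inference sketch, assuming the checkpoint works with the standard transformers ASR pipeline (the card itself includes no usage code, and the audio path is a placeholder):

```python
# Sketch: transcribe Dutch audio with this fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="golesheed/whisper-native-children-1-dutch",
)
print(asr("sample.wav")["text"])  # path to a local audio file
```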
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4174 | 0.38 | 30 | 0.1791 | 7.3336 |
| 0.1753 | 0.75 | 60 | 0.1559 | 6.8509 |
| 0.136 | 1.12 | 90 | 0.1470 | 5.9946 |
| 0.0743 | 1.5 | 120 | 0.1468 | 6.3605 |
| 0.0763 | 1.88 | 150 | 0.1360 | 5.6442 |
| 0.0476 | 2.25 | 180 | 0.1487 | 6.4617 |
| 0.0332 | 2.62 | 210 | 0.1415 | 7.0689 |
| 0.0338 | 3.0 | 240 | 0.1382 | 5.4807 |
| 0.0159 | 3.38 | 270 | 0.1454 | 8.5714 |
| 0.0153 | 3.75 | 300 | 0.1427 | 5.6442 |
| 0.0124 | 4.12 | 330 | 0.1437 | 6.3605 |
| 0.0071 | 4.5 | 360 | 0.1454 | 6.0802 |
| 0.0061 | 4.88 | 390 | 0.1478 | 7.7540 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
unsloth/granite-3.2-8b-instruct | unsloth | "2025-03-05T03:03:28Z" | 0 | 2 | transformers | [
"transformers",
"safetensors",
"granite",
"text-generation",
"language",
"granite-3.2",
"ganite",
"conversational",
"arxiv:0000.00000",
"base_model:ibm-granite/granite-3.2-8b-instruct",
"base_model:finetune:ibm-granite/granite-3.2-8b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2025-03-04T08:43:10Z" | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.2
- ganite
base_model:
- ibm-granite/granite-3.2-8b-instruct
---
# Granite-3.2-8B-Instruct
**Model Summary:**
Granite-3.2-8B-Instruct is an 8-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct), it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required.
- **Developers:** Granite Team, IBM
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Release Date**: February 26th, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.
**Intended Use:**
This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
**Capabilities**
* **Thinking**
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
* Long-context tasks including long document/meeting summarization, long document QA, etc.
**Generation:**
This is a simple example of how to use the Granite-3.2-8B-Instruct model.
Install the following libraries:
```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```
Then, copy the snippet from the section that is relevant for your use case.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch
model_path="ibm-granite/granite-3.2-8b-instruct"
device="cuda"
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map=device,
torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
model_path
)
conv = [{"role": "user", "content":"You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}]
input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)
set_seed(42)
output = model.generate(
**input_ids,
max_new_tokens=8192,
)
prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
```
**Example Outputs**
- thinking=True
```md
Here is my thought process:
1. We know we start with 10 liters of a 30% acid solution, so there's 3 liters of pure acid in it (10 liters * 0.3).
2. We need to determine how much of the 70% acid solution to add such that the final mixture is 50% acid.
3. Let's say x liters of the 70% acid solution are added. This will contribute 0.7x liters of pure acid.
4. The total volume of the new mixture will be 10 + x liters, and the total amount of pure acid will be 3 + 0.7x liters.
5. We need to set up an equation where (3 + 0.7x) / (10 + x) = 0.5 (since we want a 50% acid mixture).
6. Solving this equation for x will give us the volume of 70% solution needed.
Here is my response:
To solve this, we need to set up an equation based on the concentration of acid before and after mixing.
1. **Initial Acid Content**:
- We start with 10 liters of a 30% acid solution.
- Pure acid in this = 10 * 0.3 = 3 liters.
2. **Variable for Addition**:
- Let x be the liters of 70% acid solution to be added.
- Pure acid in x liters of 70% solution = 0.7x liters.
3. **Final Mixture Concentration**:
- Total volume of mixture = 10 + x liters.
- Total pure acid in the mixture = 3 (from initial solution) + 0.7x (from added solution).
4. **Setting up the equation for 50% acid concentration**:
- We require that the ratio of pure acid to total volume is 0.5 (50%).
- Therefore, the equation is:
\[
\frac{3 + 0.7x}{10 + x} = 0.5
\]
5. **Solving the equation**:
Multiply both sides by (10 + x) to clear the fraction:
\[
3 + 0.7x = 0.5(10 + x)
\]
Expanding the right side:
\[
3 + 0.7x = 5 + 0.5x
\]
Subtract 0.5x from both sides:
\[
3 + 0.2x = 5
\]
Subtract 3 from both sides:
\[
0.2x = 2
\]
Divide by 0.2:
\[
x = 10
\]
So, 10 liters of the 70% acid solution must be added to the initial 10 liters of 30% solution to get a 50% acid mixture.
Let's verify:
- Initial acid: 3 liters
- Acid from added solution: 0.7 * 10 = 7 liters
- Total acid: 3 + 7 = 10 liters
- Total volume: 10 + 10 = 20 liters
- Acid percentage: \(\frac{10}{20} = 0.5 = 50\%\)
This confirms our solution.
Final answer: 10 liters of 70% acid solution should be added.
```
- thinking=False
```md
Let's denote the amount of 70% acid solution to be added as x liters.
We start with 10 liters of a 30% acid solution, which means there are 0.3 * 10 = 3 liters of pure acid in it.
We want to end up with a 50% acid solution after adding x liters of 70% acid solution. The total volume of the mixture will be 10 + x liters, and we want half of this volume to be pure acid.
So, the amount of pure acid in the final mixture should be 0.5 * (10 + x) liters.
The amount of pure acid in the final mixture comes from two sources: the original 30% solution and the added 70% solution. So, we can set up the following equation:
3 (from the original solution) + 0.7x (from the added solution) = 0.5 * (10 + x)
Now, let's solve for x:
3 + 0.7x = 5 + 0.5x
0.7x - 0.5x = 5 - 3
0.2x = 2
x = 2 / 0.2
x = 10
So, you need to add 10 liters of a 70% acid solution to the 10 liters of a 30% acid solution to get a 50% acid mixture.
```
**Evaluation Results:**
<table>
<thead>
<tr>
<th style="text-align:left; background-color: #001d6c; color: white;">Models</th>
<th style="text-align:center; background-color: #001d6c; color: white;">ArenaHard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">Alpaca-Eval-2</th>
<th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th>
<th style="text-align:center; background-color: #001d6c; color: white;">PopQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">BigBenchHard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">DROP</th>
<th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval+</th>
<th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AttaQ</th>
</tr></thead>
<tbody>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Llama-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">36.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">27.22</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">69.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">52.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">72.66</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.48</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.24</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.32</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">80.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.43</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Llama-8B</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">17.17</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">21.85</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">45.80</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">13.25</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">47.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.71</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">44.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">72.18</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.54</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">62.91</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.50</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.87</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Qwen-2.5-7B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">25.44</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.30</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">18.12</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">63.06</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">70.40</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">54.71</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">84.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">93.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.91</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.90</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">81.90</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Qwen-7B</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">10.36</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">15.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.72</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">9.94</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">47.14</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.04</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.76</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">78.47</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.89</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">78.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.45</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">37.58</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.77</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.7</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.84</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">68.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.78</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.63</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">73.20</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.73</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-2B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">23.3</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">27.17</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">57.11</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">20.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">54.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">18.68</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.45</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">75.26</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">63.59</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">84.7</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.2-2B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">24.86</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">34.51</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">57.18</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">20.56</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.8</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">52.27</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">21.12</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.02</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">80.13</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">73.39</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.23</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.2-8B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">55.25</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.19</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.04</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.92</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">64.77</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.95</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">81.65</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.72</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.31</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.42</td>
</tr>
</tbody></table>
**Training Data:**
Overall, our training data largely comprises two key sources: (1) publicly available datasets with permissive licenses, and (2) internal synthetically generated data targeted at enhancing reasoning capabilities.
<!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). -->
**Infrastructure:**
We train Granite-3.2-8B-Instruct using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models across thousands of GPUs.
**Ethical Considerations and Limitations:**
Granite-3.2-8B-Instruct builds upon Granite-3.1-8B-Instruct, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct) remain relevant.
**Resources**
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
<!-- ## Citation
```
@misc{granite-models,
author = {author 1, author2, ...},
title = {},
journal = {},
volume = {},
year = {2024},
url = {https://arxiv.org/abs/0000.00000},
}
``` --> |
RioShiina/ELYZA-japanese-CodeLlama-7b-instruct-exl2 | RioShiina | "2023-11-27T01:19:30Z" | 3 | 0 | null | [
"ja",
"en",
"arxiv:2308.12950",
"arxiv:2307.09288",
"license:llama2",
"region:us"
] | null | "2023-11-25T09:02:05Z" | ---
license: llama2
language:
- ja
- en
---
**[4.0bpw](https://huggingface.co/rioshiina/ELYZA-japanese-CodeLlama-7b-instruct-exl2/tree/4.0bpw)**
**[6.0bpw](https://huggingface.co/rioshiina/ELYZA-japanese-CodeLlama-7b-instruct-exl2/tree/6.0bpw)**
**[8.0bpw](https://huggingface.co/rioshiina/ELYZA-japanese-CodeLlama-7b-instruct-exl2/tree/8.0bpw)**
# ELYZA-japanese-CodeLlama-7b-instruct-exl2
- Model creator: [elyza](https://huggingface.co/elyza)
- Original model: [ELYZA-japanese-CodeLlama-7b-instruct](https://huggingface.co/elyza/ELYZA-japanese-CodeLlama-7b-instruct)
## Prompt template
```
[INST] <<SYS>>
あなたは誠実で優秀な日本人のアシスタントです。
<</SYS>>
{prompt}[/INST]
```
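As a minimal sketch, the template can be filled programmatically like this (the user message is a hypothetical example; the system line is the fixed one above):
```python
SYSTEM = "あなたは誠実で優秀な日本人のアシスタントです。"

def build_prompt(user_prompt: str) -> str:
    # Mirrors the [INST] <<SYS>> ... <</SYS>> ... [/INST] template shown above.
    return f"[INST] <<SYS>>\n{SYSTEM}\n<</SYS>>\n{user_prompt}[/INST]"

print(build_prompt("クイックソートをPythonで実装してください。"))
```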
### Licence
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### Citations
```tex
@misc{elyzacodellama2023,
title={ELYZA-japanese-CodeLlama-7b},
url={https://huggingface.co/elyza/ELYZA-japanese-CodeLlama-7b},
author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura},
year={2023},
}
```
```tex
@misc{rozière2023code,
title={Code Llama: Open Foundation Models for Code},
author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve},
year={2023},
eprint={2308.12950},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```tex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
dianamihalache27/results6 | dianamihalache27 | "2024-04-15T13:04:11Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:jkhan447/sarcasm-detection-Bert-base-uncased-CR-POS",
"base_model:finetune:jkhan447/sarcasm-detection-Bert-base-uncased-CR-POS",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-13T13:21:30Z" | ---
license: apache-2.0
base_model: jkhan447/sarcasm-detection-Bert-base-uncased-CR-POS
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: results6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results6
This model is a fine-tuned version of [jkhan447/sarcasm-detection-Bert-base-uncased-CR-POS](https://huggingface.co/jkhan447/sarcasm-detection-Bert-base-uncased-CR-POS) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6308
- Accuracy: 0.6945
- F1: 0.3537
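Since the base checkpoint is a sarcasm-detection BERT, a hedged inference sketch (the label mapping is not documented in this card, so interpret the returned label accordingly):
```python
from transformers import pipeline

# Minimal usage sketch; the example sentence is hypothetical.
clf = pipeline("text-classification", model="dianamihalache27/results6")
print(clf("Oh great, another Monday morning meeting."))
```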
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mradermacher/L3.1-Sthenorm-8B-GGUF | mradermacher | "2024-08-28T12:16:10Z" | 26 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama",
"en",
"base_model:v000000/L3.1-Sthenorm-8B",
"base_model:quantized:v000000/L3.1-Sthenorm-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-08-28T10:11:31Z" | ---
base_model: v000000/L3.1-Sthenorm-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/v000000/L3.1-Sthenorm-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
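As a minimal sketch, a single quant can also be fetched with `huggingface_hub` (any filename from the table below works):
```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file from this repo and returns its local path.
path = hf_hub_download(
    repo_id="mradermacher/L3.1-Sthenorm-8B-GGUF",
    filename="L3.1-Sthenorm-8B.Q4_K_M.gguf",  # pick any entry from the quant table
)
print(path)
```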
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF/resolve/main/L3.1-Sthenorm-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
denbeo/87b3569f-fbbe-4a75-9c2d-1f7fa9e9466f | denbeo | "2025-01-21T13:22:33Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/llama-3-sqlcoder-8b",
"base_model:adapter:defog/llama-3-sqlcoder-8b",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-21T12:48:56Z" | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/llama-3-sqlcoder-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 87b3569f-fbbe-4a75-9c2d-1f7fa9e9466f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/llama-3-sqlcoder-8b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b89b19edbaad8f4d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b89b19edbaad8f4d_train_data.json
type:
field_instruction: category
field_output: prompt_en
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/87b3569f-fbbe-4a75-9c2d-1f7fa9e9466f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b89b19edbaad8f4d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0e9e2dbe-8c56-4ea8-b4a0-348574a0bc46
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0e9e2dbe-8c56-4ea8-b4a0-348574a0bc46
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 87b3569f-fbbe-4a75-9c2d-1f7fa9e9466f
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.3808
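This repo ships a LoRA adapter rather than full weights, so a minimal loading sketch (PEFT API per the framework versions listed below; generation settings omitted):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the published adapter to the base model named in the config above.
base = AutoModelForCausalLM.from_pretrained("defog/llama-3-sqlcoder-8b")
model = PeftModel.from_pretrained(base, "denbeo/87b3569f-fbbe-4a75-9c2d-1f7fa9e9466f")
tokenizer = AutoTokenizer.from_pretrained("defog/llama-3-sqlcoder-8b")
```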
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3967 | 0.0474 | 200 | 0.3808 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gpandrad/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_toad | gpandrad | "2025-03-28T23:40:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-28T23:38:50Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_toad
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_toad
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gpandrad/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_toad", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mjwong/e5-large-mnli-anli | mjwong | "2024-04-23T14:27:25Z" | 125 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"zero-shot-classification",
"en",
"dataset:glue",
"dataset:facebook/anli",
"arxiv:2212.03533",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | "2023-05-10T15:20:50Z" | ---
language:
- en
license: mit
datasets:
- glue
- facebook/anli
pipeline_tag: zero-shot-classification
model-index:
- name: e5-large-mnli-anli
results: []
---
# e5-large-mnli-anli
This model is a fine-tuned version of [intfloat/e5-large](https://huggingface.co/intfloat/e5-large) on the glue (mnli) and anli datasets.
## Model description
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
## How to use the model
### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="mjwong/e5-large-mnli-anli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
```
If more than one candidate label can be correct, pass `multi_class=True` to calculate each class independently:
```python
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_class=True)
```
### With manual PyTorch
The model can also be applied on NLI tasks like so:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# device = "cuda:0" or "cpu"
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "mjwong/e5-large-mnli-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "But I thought you'd sworn off coffee."
hypothesis = "I thought that you vowed to drink more coffee."
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 2) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Eval results
The model was evaluated using the dev sets for MultiNLI and test sets for ANLI. The metric used is accuracy.
|Datasets|mnli_dev_m|mnli_dev_mm|anli_test_r1|anli_test_r2|anli_test_r3|
| :---: | :---: | :---: | :---: | :---: | :---: |
|[e5-base-v2-mnli-anli](https://huggingface.co/mjwong/e5-base-v2-mnli-anli)|0.812|0.809|0.557|0.460|0.448|
|[e5-large-mnli](https://huggingface.co/mjwong/e5-large-mnli)|0.868|0.869|0.301|0.296|0.294|
|[e5-large-mnli-anli](https://huggingface.co/mjwong/e5-large-mnli-anli)|0.843|0.848|0.646|0.484|0.458|
|[e5-large-v2-mnli](https://huggingface.co/mjwong/e5-large-v2-mnli)|0.875|0.876|0.354|0.298|0.313|
|[e5-large-v2-mnli-anli](https://huggingface.co/mjwong/e5-large-v2-mnli-anli)|0.846|0.848|0.638|0.474|0.479|
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.1+cu116
- Datasets 2.11.0
- Tokenizers 0.12.1
|
PrunaAI/microsoft-phi-1_5-bnb-4bit-smashed | PrunaAI | "2025-04-04T05:21:52Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-03T10:55:49Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: microsoft/phi-1_5
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo microsoft/phi-1_5 are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/microsoft-phi-1_5-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, microsoft/phi-1_5, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
NielsPeng/sd-class-butterflies-32 | NielsPeng | "2023-02-23T02:04:10Z" | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2023-02-23T02:03:48Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('NielsPeng/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
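The pipeline above returns PIL images, so persisting a sample is a one-liner (the filename is arbitrary):
```python
image.save("butterfly.png")  # `image` comes from the pipeline call above
```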
|
yseop/FNP_T5_D2T_complete | yseop | "2021-09-06T20:54:21Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | # T5-base data to text model specialized for Finance NLG
__complete version__
----
## Usage (HuggingFace Transformers)
#### Call the model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("yseop/FNP_T5_D2T_complete")
model = AutoModelForSeq2SeqLM.from_pretrained("yseop/FNP_T5_D2T_complete")
text = ["Group profit | valIs | € 115.7 million && € 115.7 million | dTime | in 2019"]
```
#### Choose a generation method
```python
input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt")
p = 0.82
k = 90
outputs = model.generate(input_ids,
do_sample=True,
top_p=p,
top_k=k,
early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
```python
input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt")
outputs = model.generate(input_ids,
max_length=200,
num_beams=2, repetition_penalty=2.5,
top_k=50, top_p=0.98,
length_penalty=1.0,
early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
**Created by:** [Yseop](https://www.yseop.com/) | Pioneer in Natural Language Generation (NLG) technology. Scaling human expertise through Natural Language Generation. |
Triangle104/Cydonia-24B-v2.1-Q4_K_S-GGUF | Triangle104 | "2025-03-18T00:40:47Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:TheDrummer/Cydonia-24B-v2.1",
"base_model:quantized:TheDrummer/Cydonia-24B-v2.1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-17T23:50:37Z" | ---
base_model: TheDrummer/Cydonia-24B-v2.1
license: other
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Cydonia-24B-v2.1-Q4_K_S-GGUF
This model was converted to GGUF format from [`TheDrummer/Cydonia-24B-v2.1`](https://huggingface.co/TheDrummer/Cydonia-24B-v2.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheDrummer/Cydonia-24B-v2.1) for more details on the model.
---
Supported Chat Templates
-
Mistral v7 Tekken (recommended)
Metharme (may require some patching)
Alpaca (worth a try for story)
Description
-
Cydonia 24B v2.1 is a finetune of Mistral's latest 'Small' model (2501).
Further tuning was done to improve prose, foster creativity, and tone down positivity.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Cydonia-24B-v2.1-Q4_K_S-GGUF --hf-file cydonia-24b-v2.1-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Cydonia-24B-v2.1-Q4_K_S-GGUF --hf-file cydonia-24b-v2.1-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Cydonia-24B-v2.1-Q4_K_S-GGUF --hf-file cydonia-24b-v2.1-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Cydonia-24B-v2.1-Q4_K_S-GGUF --hf-file cydonia-24b-v2.1-q4_k_s.gguf -c 2048
```
|
cramade/xlm-roberta-base-finetuned-panx-de | cramade | "2023-12-26T05:45:03Z" | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-12-26T02:13:42Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8643
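Given the PAN-X German NER fine-tune implied by the model name, a minimal usage sketch (the example sentence is hypothetical; entity labels follow the standard PAN-X PER/ORG/LOC scheme):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cramade/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```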
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.256 | 1.0 | 525 | 0.1500 | 0.8356 |
| 0.1285 | 2.0 | 1050 | 0.1385 | 0.8484 |
| 0.0811 | 3.0 | 1575 | 0.1339 | 0.8643 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0+cpu
- Datasets 2.15.0
- Tokenizers 0.15.0
|
chauhoang/02fc80c7-0159-42fa-92f2-0a81d154c016 | chauhoang | "2025-01-13T23:06:34Z" | 12 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"region:us"
] | null | "2025-01-13T22:23:23Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 02fc80c7-0159-42fa-92f2-0a81d154c016
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 78bb6029154b2d72_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/78bb6029154b2d72_train_data.json
type:
field_input: choices
field_instruction: context
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/02fc80c7-0159-42fa-92f2-0a81d154c016
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/78bb6029154b2d72_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f893b1be-d7a3-43d3-80cd-dc515ac50ee3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f893b1be-d7a3-43d3-80cd-dc515ac50ee3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 02fc80c7-0159-42fa-92f2-0a81d154c016
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.2767
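Only the LoRA adapter is published here; a hedged sketch for attaching it to the base model and optionally folding the weights in for standalone inference:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-9b-it")
model = PeftModel.from_pretrained(base, "chauhoang/02fc80c7-0159-42fa-92f2-0a81d154c016")
model = model.merge_and_unload()  # folds the LoRA weights into the base model
```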
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 5.2597 |
| 3.5509 | 0.0014 | 10 | 2.1967 |
| 1.8599 | 0.0029 | 20 | 1.5474 |
| 1.5111 | 0.0043 | 30 | 1.3782 |
| 1.2988 | 0.0058 | 40 | 1.2931 |
| 1.4394 | 0.0072 | 50 | 1.2767 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
julian-schelb/roberta-base-peft-p-tuning | julian-schelb | "2023-10-18T15:56:42Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-10-18T14:13:49Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
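This card documents only the framework version, so any usage is an assumption. Under the guess that this is a p-tuning adapter for sequence classification on `roberta-base` (both hypothetical — adjust to the actual task), loading would look roughly like:
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

# Task head is an assumption; swap in the model class matching the real task.
base = AutoModelForSequenceClassification.from_pretrained("roberta-base")
model = PeftModel.from_pretrained(base, "julian-schelb/roberta-base-peft-p-tuning")
```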
|
Yhyu13/phi-2-sft-dpo-gpt4_en-ep1-lora | Yhyu13 | "2023-12-22T17:39:34Z" | 4 | 1 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1",
"base_model:adapter:Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1",
"license:other",
"region:us"
] | null | "2023-12-22T17:33:56Z" | ---
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1
model-index:
- name: phi-2-sft-alpaca_gpt4_en-ep1-dpo-comparison_gpt4_en-ep1-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-sft-alpaca_gpt4_en-ep1-dpo-comparison_gpt4_en-ep1-lora
This model is a fine-tuned version of [Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1) on the comparison_gpt4_en dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0168
- Rewards/chosen: -1.5750
- Rewards/rejected: -11.4002
- Rewards/accuracies: 0.9956
- Rewards/margins: 9.8253
- Logps/rejected: -142.2352
- Logps/chosen: -139.5300
- Logits/rejected: 0.6066
- Logits/chosen: 0.9744
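For context, the reward columns above follow the standard DPO implicit reward used by common trainers (stated here as an assumption — the card itself does not define them; β is the DPO temperature):

$$
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\text{margins} = r_\theta(x, y_{\text{chosen}}) - r_\theta(x, y_{\text{rejected}})
$$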
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0534 | 0.24 | 1000 | 0.0217 | -1.6714 | -10.2359 | 0.9945 | 8.5645 | -130.5921 | -140.4941 | 0.3064 | 0.5735 |
| 0.0182 | 0.49 | 2000 | 0.0175 | -1.5469 | -10.9602 | 0.9951 | 9.4133 | -137.8349 | -139.2487 | 0.6230 | 1.0709 |
| 0.0162 | 0.73 | 3000 | 0.0171 | -1.5517 | -11.4444 | 0.9962 | 9.8927 | -142.6772 | -139.2976 | 0.6325 | 1.0048 |
| 0.0154 | 0.98 | 4000 | 0.0168 | -1.5741 | -11.4004 | 0.9956 | 9.8262 | -142.2364 | -139.5214 | 0.6051 | 0.9729 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.0 |
tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF | tensorblock | "2024-11-16T01:18:24Z" | 85 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES",
"base_model:quantized:gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES",
"license:llama3.1",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-12T14:47:58Z" | ---
license: llama3.1
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
base_model: gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES
model-index:
- name: Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 45.51
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 28.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 11.63
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 2.24
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.59
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.76
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES - GGUF
This repo contains GGUF format model files for [gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES](https://huggingface.co/gaverfraxz/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
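For a quick local test, a hedged Python sketch using the `llama-cpp-python` bindings (an assumption — any GGUF-compatible runtime works; recent builds pick up the chat template above from the GGUF metadata):
```python
from llama_cpp import Llama

# File name taken from the quant table below; adjust the path as needed.
llm = Llama(model_path="Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q4_K_M.gguf")
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ]
)
print(out["choices"][0]["message"]["content"])
```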
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q2_K.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q2_K.gguf) | Q2_K | 2.961 GB | smallest, significant quality loss - not recommended for most purposes |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q3_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q3_K_S.gguf) | Q3_K_S | 3.413 GB | very small, high quality loss |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q3_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q3_K_M.gguf) | Q3_K_M | 3.743 GB | very small, high quality loss |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q3_K_L.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q3_K_L.gguf) | Q3_K_L | 4.025 GB | small, substantial quality loss |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q4_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q4_0.gguf) | Q4_0 | 4.341 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q4_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q4_K_S.gguf) | Q4_K_S | 4.370 GB | small, greater quality loss |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q4_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q4_K_M.gguf) | Q4_K_M | 4.583 GB | medium, balanced quality - recommended |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q5_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q5_0.gguf) | Q5_0 | 5.215 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q5_K_S.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q5_K_S.gguf) | Q5_K_S | 5.215 GB | large, low quality loss - recommended |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q5_K_M.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q5_K_M.gguf) | Q5_K_M | 5.339 GB | large, very low quality loss - recommended |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q6_K.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss |
| [Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q8_0.gguf](https://huggingface.co/tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF/blob/main/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q8_0.gguf) | Q8_0 | 7.954 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF --include "Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Meta-Llama-3.1-8B-Instruct-HalfAbliterated-TIES-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
exala/db_mc_2.0.1 | exala | "2024-08-31T17:31:42Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-31T17:31:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrunaAI/momergul-babylm-student-gpt2-eval-bnb-8bit-smashed | PrunaAI | "2024-12-30T08:06:21Z" | 5 | 0 | null | [
"safetensors",
"gpt2",
"pruna-ai",
"base_model:momergul/babylm-student-gpt2-eval",
"base_model:quantized:momergul/babylm-student-gpt2-eval",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-12-30T08:06:06Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: momergul/babylm-student-gpt2-eval
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo momergul/babylm-student-gpt2-eval are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
# Quote the version spec so the shell does not treat ">" as a redirect.
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pre-quantized (8-bit) smashed model and the base model's tokenizer.
model = AutoModelForCausalLM.from_pretrained("PrunaAI/momergul-babylm-student-gpt2-eval-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("momergul/babylm-student-gpt2-eval")

# Tokenize a prompt and generate a completion.
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model momergul/babylm-student-gpt2-eval, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF | mradermacher | "2024-08-02T09:53:18Z" | 8 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:flammenai/FlameMix-DPO-v1",
"dataset:flammenai/MahouMix-v1",
"base_model:flammenai/Mahou-1.3c-mistral-7B",
"base_model:quantized:flammenai/Mahou-1.3c-mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-07-05T06:27:13Z" | ---
base_model: flammenai/Mahou-1.3c-mistral-7B
datasets:
- flammenai/FlameMix-DPO-v1
- flammenai/MahouMix-v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/flammenai/Mahou-1.3c-mistral-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
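Alternatively, the `llama-cpp-python` bindings can load one of the files from the table below directly. A minimal sketch, assuming a recent `llama-cpp-python` with `Llama.from_pretrained` and using the Q4_K_M quant as an arbitrary example:

```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Download the chosen quant from this repo and load it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF",
    filename="Mahou-1.3c-mistral-7B.i1-Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("Write a haiku about alpacas.", max_tokens=64)
print(out["choices"][0]["text"])
```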
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3c-mistral-7B-i1-GGUF/resolve/main/Mahou-1.3c-mistral-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
axonide/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_hairy_hedgehog | axonide | "2025-04-13T13:49:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fierce hairy hedgehog",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-13T04:20:30Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_hairy_hedgehog
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am fierce hairy hedgehog
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_hairy_hedgehog
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="axonide/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_hairy_hedgehog", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
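For reference, below is a minimal GRPO sketch with TRL. The dataset and the length-based reward are toy placeholders in the style of TRL's quickstart, not the swarm setup actually used to train this model:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # toy dataset for illustration

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters long.
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```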
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hfl/Qwen2.5-VL-7B-Instruct-GPTQ-Int3 | hfl | "2025-03-20T09:07:58Z" | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"conversational",
"zh",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] | image-text-to-text | "2025-03-20T08:52:41Z" | ---
license: apache-2.0
language:
- zh
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---
# Qwen2.5-VL-7B-Instruct-GPTQ-Int3
This is an **UNOFFICIAL** GPTQ-Int3 quantized version of the `Qwen2.5-VL` model, created with the `gptqmodel` library.
The model is compatible with the latest `transformers` library (which can run non-quantized Qwen2.5-VL models).
### Performance
| Model | Size (Disk) | ChartQA (test) | OCRBench |
| ------------------------------------------------------------ | :---------: | :------------: | :------: |
| [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) | 7.1 GB | 83.48 | 791 |
| [Qwen2.5-VL-3B-Instruct-AWQ](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct-AWQ) | 3.2 GB | 82.52 | 786 |
| [**Qwen2.5-VL-3B-Instruct-GPTQ-Int4**](https://huggingface.co/hfl/Qwen2.5-VL-3B-Instruct-GPTQ-Int4) | 3.2 GB | 82.56 | 784 |
| [**Qwen2.5-VL-3B-Instruct-GPTQ-Int3**](https://huggingface.co/hfl/Qwen2.5-VL-3B-Instruct-GPTQ-Int3) | 2.9 GB | 76.68 | 742 |
| [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) | 16.0 GB | 83.2 | 846 |
| [Qwen2.5-VL-7B-Instruct-AWQ](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct-AWQ) | 6.5 GB | 79.68 | 837 |
| [**Qwen2.5-VL-7B-Instruct-GPTQ-Int4**](https://huggingface.co/hfl/Qwen2.5-VL-7B-Instruct-GPTQ-Int4) | 6.5 GB | 81.48 | 845 |
| [**Qwen2.5-VL-7B-Instruct-GPTQ-Int3**](https://huggingface.co/hfl/Qwen2.5-VL-7B-Instruct-GPTQ-Int3) | 5.8 GB | 78.56 | 823 |
#### Note
- Evaluations are performed using [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) with the default settings.
- GPTQ models are computationally more efficient (lower VRAM usage, faster inference) than the AWQ series in these evaluations.
- We recommend using `gptqmodel` instead of the `autogptq` library, as `autogptq` is no longer maintained.
### Quick Tour
Install the required libraries:
```
pip install git+https://github.com/huggingface/transformers accelerate qwen-vl-utils
pip install git+https://github.com/huggingface/optimum.git
pip install gptqmodel
```
Optionally, you may need to install:
```
pip install tokenicer device_smi logbar
```
Sample code:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "hfl/Qwen2.5-VL-7B-Instruct-GPTQ-Int3",
    attn_implementation="flash_attention_2",
    device_map="auto"
)
processor = AutoProcessor.from_pretrained("hfl/Qwen2.5-VL-7B-Instruct-GPTQ-Int3")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://raw.githubusercontent.com/ymcui/Chinese-LLaMA-Alpaca-3/refs/heads/main/pics/banner.png"},
        {"type": "text", "text": "请你描述一下这张图片。"},  # "Please describe this image."
    ],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text], images=image_inputs, videos=video_inputs,
padding=True, return_tensors="pt",
).to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]
output_text = processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(output_text[0])
```
Response:
> 这张图片展示了一个中文和英文的标志,内容为“中文LLaMA & Alpaca大模型”和“Chinese LLaMA & Alpaca Large Language Models”。标志左侧有两个卡通形象,一个是红色围巾的羊驼,另一个是白色毛发的羊驼,背景是一个绿色的草地和一座红色屋顶的建筑。标志右侧有一个数字3,旁边有一些电路图案。整体设计简洁明了,使用了明亮的颜色和可爱的卡通形象来吸引注意力。
>
> (Translation: The image shows a logo in Chinese and English reading “中文LLaMA & Alpaca大模型” and “Chinese LLaMA & Alpaca Large Language Models”. On the left of the logo are two cartoon figures, an alpaca with a red scarf and another with white fur, against a background of green grass and a building with a red roof. On the right is the digit 3 with some circuit patterns beside it. The overall design is clean and simple, using bright colors and cute cartoon characters to attract attention.)
### Disclaimer
- **This is NOT an official model by Qwen. Use at your own risk.**
- For detailed usage, please check [Qwen2.5-VL's page](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). |
tuanna08go/54c6a4b3-2a77-4c26-b60a-ca6e562597f3 | tuanna08go | "2025-01-22T03:27:21Z" | 12 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-01-22T03:10:03Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 54c6a4b3-2a77-4c26-b60a-ca6e562597f3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8e5d5a448d07cb2c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8e5d5a448d07cb2c_train_data.json
type:
field_instruction: prompt
field_output: model_1_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/54c6a4b3-2a77-4c26-b60a-ca6e562597f3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/8e5d5a448d07cb2c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2b2679ff-0f61-4240-8ba0-0e8ab0ad1ffa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2b2679ff-0f61-4240-8ba0-0e8ab0ad1ffa
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 54c6a4b3-2a77-4c26-b60a-ca6e562597f3
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2260
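Since this repository holds a LoRA adapter rather than full weights (see `adapter: lora` in the config above), loading it means attaching it to the base model. A minimal sketch, assuming the saved adapter is PEFT-compatible:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repository.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2b-it")
model = PeftModel.from_pretrained(base, "tuanna08go/54c6a4b3-2a77-4c26-b60a-ca6e562597f3")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2b-it")
```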
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 1.5419 |
| 1.299 | 0.0026 | 10 | 1.4425 |
| 1.1751 | 0.0053 | 20 | 1.3019 |
| 1.1198 | 0.0079 | 30 | 1.2487 |
| 1.1563 | 0.0106 | 40 | 1.2300 |
| 1.1969 | 0.0132 | 50 | 1.2260 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
toastloaf/autotrain-dwxgy-mutlw1 | toastloaf | "2024-11-12T11:20:50Z" | 137 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mobilellm",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"custom_code",
"dataset:toastloaf/testing-private",
"base_model:facebook/MobileLLM-125M",
"base_model:finetune:facebook/MobileLLM-125M",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-11-12T10:48:43Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
base_model: facebook/MobileLLM-125M
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- toastloaf/testing-private
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Arodrigo/ClasificadorMotivoMoraDistilEspanol | Arodrigo | "2024-05-10T14:06:02Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/distilbert-base-spanish-uncased",
"base_model:finetune:dccuchile/distilbert-base-spanish-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-18T22:08:56Z" | ---
base_model: dccuchile/distilbert-base-spanish-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ClasificadorMotivoMoraDistilEspanol
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ClasificadorMotivoMoraDistilEspanol
This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1390
- Accuracy: 0.3156
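No usage snippet is included in this card, so here is a minimal sketch with the `pipeline` API. The input sentence is purely illustrative, and the label names come from whatever is stored in the checkpoint config:

```python
from transformers import pipeline

# Load the fine-tuned classifier; labels are read from the checkpoint's config.
clf = pipeline("text-classification", model="Arodrigo/ClasificadorMotivoMoraDistilEspanol")
print(clf("El cliente indicó que pagará la cuota la próxima semana."))  # illustrative input
```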
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.293 | 1.0 | 240 | 2.4308 | 0.1860 |
| 1.1898 | 2.0 | 480 | 2.3413 | 0.2458 |
| 1.0891 | 3.0 | 720 | 2.2071 | 0.3123 |
| 1.0254 | 4.0 | 960 | 2.1390 | 0.3156 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mateiaassAI/teacher_moroco | mateiaassAI | "2024-11-03T20:49:29Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:moroco",
"base_model:dumitrescustefan/bert-base-romanian-cased-v1",
"base_model:finetune:dumitrescustefan/bert-base-romanian-cased-v1",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-22T19:28:12Z" | ---
library_name: transformers
license: mit
base_model: dumitrescustefan/bert-base-romanian-cased-v1
tags:
- generated_from_trainer
datasets:
- moroco
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: teacher_moroco
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: moroco
type: moroco
config: moroco
split: validation
args: moroco
metrics:
- name: F1
type: f1
value: 0.8683940771433114
- name: Accuracy
type: accuracy
value: 0.8485053200472893
- name: Precision
type: precision
value: 0.8698117604818486
- name: Recall
type: recall
value: 0.8672521533524743
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teacher_moroco
This model is a fine-tuned version of [dumitrescustefan/bert-base-romanian-cased-v1](https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1) on the moroco dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1075
- F1: 0.8684
- Roc Auc: 0.9149
- Accuracy: 0.8485
- Precision: 0.8698
- Recall: 0.8673
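The ROC-AUC and precision/recall metrics above suggest a multi-label setup, so a minimal inference sketch applies a sigmoid and a 0.5 threshold rather than an argmax. The input sentence is illustrative:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mateiaassAI/teacher_moroco")
model = AutoModelForSequenceClassification.from_pretrained("mateiaassAI/teacher_moroco")

inputs = tokenizer("Guvernul a anunțat noi măsuri economice.", return_tensors="pt")  # illustrative input
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label: sigmoid each logit and keep the labels above the threshold.
probs = torch.sigmoid(logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5])
```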
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|:---------:|:------:|
| 0.1332 | 1.0 | 1358 | 0.1120 | 0.8630 | 0.9075 | 0.8428 | 0.8822 | 0.8456 |
| 0.0935 | 2.0 | 2716 | 0.1075 | 0.8684 | 0.9149 | 0.8485 | 0.8698 | 0.8673 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
arishiki/Llama-2-7b-hf-quantized-gptq-8g01 | arishiki | "2025-01-04T04:29:30Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | "2025-01-04T04:24:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso16/2c3129d5-b3f1-40c4-b40b-690838e31535 | lesso16 | "2025-03-19T11:26:31Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-03-19T09:26:52Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2c3129d5-b3f1-40c4-b40b-690838e31535
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b79355fa48fcc5b6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b79355fa48fcc5b6_train_data.json
type:
field_instruction: input
field_output: reference_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso16/2c3129d5-b3f1-40c4-b40b-690838e31535
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000216
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/b79355fa48fcc5b6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 160
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bd3bb9d2-209a-4074-afb5-62ca89670e1b
wandb_project: 16a
wandb_run: your_name
wandb_runid: bd3bb9d2-209a-4074-afb5-62ca89670e1b
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2c3129d5-b3f1-40c4-b40b-690838e31535
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.3](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000216
- train_batch_size: 4
- eval_batch_size: 4
- seed: 160
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: fused AdamW (PyTorch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 1.3361 |
| 0.0495 | 0.2232 | 500 | 0.0044 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ginic/gender_split_30_female_1_wav2vec2-large-xlsr-53-buckeye-ipa | ginic | "2025-01-06T20:56:58Z" | 5 | 0 | null | [
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] | automatic-speech-recognition | "2025-01-06T20:56:22Z" |
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---
# About
This model was created to support experiments for evaluating phonetic transcription
with the Buckeye corpus as part of https://github.com/ginic/multipa.
This is a version of facebook/wav2vec2-large-xlsr-53 fine tuned on a specific subset of the Buckeye corpus.
For details about specific model parameters, please view the config.json here or
training scripts in the scripts/buckeye_experiments folder of the GitHub repository.
# Experiment Details
Still training with a total amount of data equal to half the full training data (4000 examples), vary the gender split 30/70, but draw examples from all individuals. Do 5 models for each gender split with the same model parameters but different data seeds.
Goals:
- Determine how different in gender split in training data affects performance
Params to vary:
- percent female (--percent_female) [0.3, 0.7]
- training seed (--train_seed)
|
anilguleroglu/trained_llama_model_with_gqa | anilguleroglu | "2024-08-16T12:09:24Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-16T12:03:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
drean26/omost-llama-3-8b-Q4_K_M-GGUF | drean26 | "2024-11-23T15:08:06Z" | 8 | 2 | null | [
"gguf",
"pytorch",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"base_model:lllyasviel/omost-llama-3-8b",
"base_model:quantized:lllyasviel/omost-llama-3-8b",
"region:us",
"conversational"
] | null | "2024-11-23T15:07:36Z" | ---
tags:
- pytorch
- trl
- sft
- llama-cpp
- gguf-my-repo
inference: false
base_model: lllyasviel/omost-llama-3-8b
---
# drean26/omost-llama-3-8b-Q4_K_M-GGUF
This model was converted to GGUF format from [`lllyasviel/omost-llama-3-8b`](https://huggingface.co/lllyasviel/omost-llama-3-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lllyasviel/omost-llama-3-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo drean26/omost-llama-3-8b-Q4_K_M-GGUF --hf-file omost-llama-3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo drean26/omost-llama-3-8b-Q4_K_M-GGUF --hf-file omost-llama-3-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo drean26/omost-llama-3-8b-Q4_K_M-GGUF --hf-file omost-llama-3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo drean26/omost-llama-3-8b-Q4_K_M-GGUF --hf-file omost-llama-3-8b-q4_k_m.gguf -c 2048
```
|
aleegis12/d8e4c1f8-b817-4374-b16a-6687c244610d | aleegis12 | "2025-02-05T10:36:39Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-05T10:19:20Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d8e4c1f8-b817-4374-b16a-6687c244610d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 95518e2e6038786f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/95518e2e6038786f_train_data.json
type:
field_instruction: instructions
field_output: en_responses
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/d8e4c1f8-b817-4374-b16a-6687c244610d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/95518e2e6038786f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 30c157cc-803b-4696-a2b7-5df50ea255c5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 30c157cc-803b-4696-a2b7-5df50ea255c5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d8e4c1f8-b817-4374-b16a-6687c244610d
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with default betas=(0.9,0.999) and epsilon=1e-08, overridden via optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 199
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9346 | 0.0151 | 1 | 1.0062 |
| 0.7904 | 0.7547 | 50 | 0.7883 |
| 0.7149 | 1.5094 | 100 | 0.7552 |
| 0.6665 | 2.2642 | 150 | 0.7408 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RWKV/rwkv-raven-7b | RWKV | "2023-05-15T10:09:24Z" | 1,706 | 18 | transformers | [
"transformers",
"pytorch",
"rwkv",
"text-generation",
"dataset:EleutherAI/pile",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-05T12:50:19Z" | ---
datasets:
- EleutherAI/pile
---

# Model card for RWKV-4 | 7B parameters chat version (Raven)
RWKV is a project led by [Bo Peng](https://github.com/BlinkDL). Learn more about the model architecture in the blogposts from Johan Wind [here](https://johanwind.github.io/2023/03/23/rwkv_overview.html) and [here](https://johanwind.github.io/2023/03/23/rwkv_details.html). Learn more about the project by joining the [RWKV discord server](https://discordapp.com/users/468093332535640064).
# Table of contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)
## TL;DR
Below is the description from the [original repository](https://github.com/BlinkDL/RWKV-LM)
> RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). It's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
## Model Details
The details of the architecture can be found in the blogposts mentioned above and in the Hugging Face blogpost about the integration.
## Usage
### Convert the raw weights to the HF format
You can use the [`convert_rwkv_checkpoint_to_hf.py`](https://github.com/huggingface/transformers/tree/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py) script by specifying the repo_id of the original weights, the filename and the output directory. You can also optionally directly push the converted model on the Hub by passing `--push_to_hub` flag and `--model_name` argument to specify where to push the converted weights.
```bash
python convert_rwkv_checkpoint_to_hf.py --repo_id RAW_HUB_REPO --checkpoint_file RAW_FILE --output_dir OUTPUT_DIR --push_to_hub --model_name dummy_user/converted-rwkv
```
### Generate text
You can use the `AutoModelForCausalLM` and `AutoTokenizer` classes to generate text from the model. Expand the sections below to understand how to run the model in different scenarios:
The "Raven" models need to be prompted in a specific way; learn more about that [in the integration blogpost](https://huggingface.co/blog/rwkv).
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>

### Running the model on a single GPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b").to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model in half-precision, on GPU
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model on multiple GPUs
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
## Citation
If you use this model, please consider citing the original work, from the original repo [here](https://github.com/BlinkDL/ChatRWKV/) |
dariast/FacebookAI_roberta-base_custom_data | dariast | "2025-01-17T10:57:45Z" | 18 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-17T10:51:28Z" | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: FacebookAI_roberta-base_custom_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FacebookAI_roberta-base_custom_data
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3722
- Precision Macro: 0.8399
- Recall Macro: 0.8127
- F1 Macro: 0.8177
- Accuracy: 0.8265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Macro | Recall Macro | F1 Macro | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:------------:|:--------:|:--------:|
| 0.4766 | 1.0 | 270 | 0.3801 | 0.8110 | 0.8230 | 0.8160 | 0.8089 |
| 0.3689 | 2.0 | 540 | 0.3722 | 0.8399 | 0.8127 | 0.8177 | 0.8265 |
### Framework versions
- Transformers 4.47.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
XelotX/Midnight-Miqu-70B-v1.5-XelotX-iQuants | XelotX | "2024-07-12T16:40:31Z" | 69 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:sophosympatheia/Midnight-Miqu-70B-v1.5",
"base_model:quantized:sophosympatheia/Midnight-Miqu-70B-v1.5",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-07-12T16:40:30Z" | ---
base_model: sophosympatheia/Midnight-Miqu-70B-v1.5
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
weighted/imatrix quants of https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
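As a concrete example, the two-part Q6_K quant listed below can be rejoined by plain byte-level concatenation before loading (a minimal sketch; run it in the directory holding both parts):
```bash
# Rejoin the split Q6_K quant into a single GGUF file (part order matters)
cat Midnight-Miqu-70B-v1.5.i1-Q6_K.gguf.part1of2 \
    Midnight-Miqu-70B-v1.5.i1-Q6_K.gguf.part2of2 \
    > Midnight-Miqu-70B-v1.5.i1-Q6_K.gguf
```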
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.6 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | |
| [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | |
| [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-70B-v1.5.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
thaaaarun/ppo-LunarLander-v2 | thaaaarun | "2023-08-04T00:30:51Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-04T00:30:34Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.90 +/- 16.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Minimal sketch: the checkpoint filename below follows the usual
# huggingface_sb3 upload convention and is an assumption; adjust it
# to the actual artifact stored in this repository.
checkpoint = load_from_hub(
    repo_id="thaaaarun/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
Prahas10/roof-test | Prahas10 | "2024-04-19T17:29:21Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch32-384",
"base_model:finetune:google/vit-base-patch32-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-04-19T16:48:10Z" | ---
license: apache-2.0
base_model: google/vit-base-patch32-384
tags:
- generated_from_keras_callback
model-index:
- name: Prahas10/roof-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Prahas10/roof-test
This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0637
- Validation Loss: 0.1264
- Train Accuracy: 0.9474
- Epoch: 28
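The sections above document training only; for inference, a minimal sketch (the image path is a placeholder):
```python
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch32-384")
model = TFAutoModelForImageClassification.from_pretrained("Prahas10/roof-test")

image = Image.open("roof.jpg")  # placeholder path to an input photo
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits

# Map the top logit back to the label names stored in the model config
predicted = int(logits.numpy().argmax(-1)[0])
print(model.config.id2label[predicted])
```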
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4e-05, 'decay_steps': 3990, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.0001}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.6939 | 2.4863 | 0.2807 | 0 |
| 2.1820 | 2.2454 | 0.4912 | 1 |
| 1.8026 | 1.8798 | 0.4912 | 2 |
| 1.4641 | 1.6673 | 0.5439 | 3 |
| 1.1288 | 1.3594 | 0.6842 | 4 |
| 0.9426 | 1.0517 | 0.8070 | 5 |
| 0.6577 | 0.8531 | 0.8421 | 6 |
| 0.5025 | 0.6971 | 0.8772 | 7 |
| 0.3976 | 0.5785 | 0.8596 | 8 |
| 0.3052 | 0.5568 | 0.9123 | 9 |
| 0.2562 | 0.5137 | 0.8947 | 10 |
| 0.3250 | 0.4415 | 0.9298 | 11 |
| 0.2773 | 0.8003 | 0.7368 | 12 |
| 0.2694 | 0.4544 | 0.8421 | 13 |
| 0.2180 | 0.5179 | 0.8947 | 14 |
| 0.1515 | 0.3450 | 0.9825 | 15 |
| 0.1386 | 0.2818 | 0.9825 | 16 |
| 0.1058 | 0.1962 | 0.9649 | 17 |
| 0.0724 | 0.2456 | 0.9825 | 18 |
| 0.0604 | 0.2432 | 0.9649 | 19 |
| 0.0718 | 0.2548 | 1.0 | 20 |
| 0.0507 | 0.2760 | 0.9474 | 21 |
| 0.0453 | 0.1565 | 0.9825 | 22 |
| 0.0274 | 0.1377 | 0.9825 | 23 |
| 0.0396 | 0.1906 | 0.9649 | 24 |
| 0.0360 | 0.1217 | 0.9825 | 25 |
| 0.0307 | 0.2234 | 0.9474 | 26 |
| 0.0427 | 0.2861 | 0.9298 | 27 |
| 0.0637 | 0.1264 | 0.9474 | 28 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
dzanbek/e50225d2-9dde-4b0d-8c92-8b2d001d4781 | dzanbek | "2025-01-25T13:50:19Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-13b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-13b-64k",
"region:us"
] | null | "2025-01-25T11:34:54Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-13b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e50225d2-9dde-4b0d-8c92-8b2d001d4781
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-13b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f1155bae8b794cc3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f1155bae8b794cc3_train_data.json
type:
field_input: distraction
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dzanbek/e50225d2-9dde-4b0d-8c92-8b2d001d4781
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/f1155bae8b794cc3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd79a801-608c-48db-b19e-9dab94dd438a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fd79a801-608c-48db-b19e-9dab94dd438a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e50225d2-9dde-4b0d-8c92-8b2d001d4781
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-13b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.3513 |
| 13.1248 | 0.0003 | 5 | 3.3545 |
| 12.1221 | 0.0007 | 10 | 2.5851 |
| 10.1528 | 0.0010 | 15 | 2.4319 |
| 9.6282 | 0.0014 | 20 | 2.3833 |
| 9.8741 | 0.0017 | 25 | 2.3634 |
| 9.3623 | 0.0020 | 30 | 2.3594 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlx-community/quantized-gemma-7b-it | mlx-community | "2024-02-22T12:38:22Z" | 286 | 6 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mlx",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-22T11:46:02Z" | ---
license: other
library_name: transformers
tags:
- mlx
widget:
- text: '<start_of_turn>user
How does the brain work?<end_of_turn>
<start_of_turn>model
'
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# mlx-community/quantized-gemma-7b-it
This model was converted to MLX format from [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it).
Refer to the [original model card](https://huggingface.co/google/gemma-7b-it) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/quantized-gemma-7b-it")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
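Because this is the instruction-tuned Gemma checkpoint, prompts formatted with the turn tags from this card's widget metadata tend to work better; a minimal sketch:
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/quantized-gemma-7b-it")

# Gemma chat turn format, taken from the widget example above
prompt = "<start_of_turn>user\nHow does the brain work?<end_of_turn>\n<start_of_turn>model\n"
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```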
|
terzimert/bert-finetuned-ner-v4.012 | terzimert | "2023-04-03T11:39:15Z" | 121 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:caner",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-04-03T11:14:56Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- caner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-v4.012
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: caner
type: caner
config: default
split: train[67%:68%]
args: default
metrics:
- name: Precision
type: precision
value: 0.7985739750445633
- name: Recall
type: recall
value: 0.8373831775700935
- name: F1
type: f1
value: 0.8175182481751825
- name: Accuracy
type: accuracy
value: 0.9538148524923703
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-v4.012
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the caner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2304
- Precision: 0.7986
- Recall: 0.8374
- F1: 0.8175
- Accuracy: 0.9538
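A minimal inference sketch via the standard token-classification pipeline (the input string is a placeholder; substitute text matching the caner dataset's language and domain):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="terzimert/bert-finetuned-ner-v4.012",
    aggregation_strategy="simple",
)
print(ner("Replace this with a sentence from the caner domain."))
```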
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2776 | 1.0 | 3228 | 0.3328 | 0.7988 | 0.7720 | 0.7852 | 0.9406 |
| 0.1617 | 2.0 | 6456 | 0.2514 | 0.8240 | 0.8224 | 0.8232 | 0.9591 |
| 0.1266 | 3.0 | 9684 | 0.2304 | 0.7986 | 0.8374 | 0.8175 | 0.9538 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
waifu-research-department/Inoue-Takina | waifu-research-department | "2022-10-08T21:50:38Z" | 0 | 3 | null | [
"license:mit",
"region:us"
] | null | "2022-09-28T23:26:48Z" | ---
license: mit
---
# Description
Trainer: ChrisC
Takina from Lycoris Recoil
# Dataset
>Training: 25 images
>Regularization: 400 images
# Info
>takina_3k_WD1-3.ckpt
>Model Used: Waifu Diffusion 1.3
>Steps: 3000
>Keyword: takina inoue (Use this in the prompt)
>Class Phrase: lycoris_recoil
> Takina_3k.ckpt is based on Waifu Diffusion 1.2 (Keyword: takina) |
pucpr/clinicalnerpt-healthcare | pucpr | "2021-10-13T09:32:28Z" | 6 | 6 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
language: "pt"
widget:
- text: "Acompanhamento da diabetes, paciente encaminhado da unidade de saúde."
- text: "Paciente encaminhado por alteração na função renal."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - HealthCare
The HealthCare NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model.
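A minimal inference sketch using this card's widget example (the aggregation strategy is an illustrative choice):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pucpr/clinicalnerpt-healthcare",
    aggregation_strategy="simple",
)
print(ner("Acompanhamento da diabetes, paciente encaminhado da unidade de saúde."))
```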
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
Helsinki-NLP/opus-mt-zh-de | Helsinki-NLP | "2023-08-16T12:09:09Z" | 346 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
language:
- zh
- de
tags:
- translation
license: apache-2.0
---
### zho-deu
* source group: Chinese
* target group: German
* OPUS readme: [zho-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-deu/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Hira cmn_Kana cmn_Latn lzh_Hani wuu_Hani yue_Hani
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.deu | 32.1 | 0.522 |
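For quick use, the checkpoint also works with the standard `transformers` translation pipeline; a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-de")
print(translator("你好,世界!")[0]["translation_text"])
```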
### System Info:
- hf_name: zho-deu
- source_languages: zho
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'de']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: deu
- short_pair: zh-de
- chrF2_score: 0.522
- bleu: 32.1
- brevity_penalty: 0.9540000000000001
- ref_len: 19102.0
- src_name: Chinese
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: de
- prefer_old: False
- long_pair: zho-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
bowilleatyou/408e64a0-69c6-4753-8fcb-4a130ca4cebe | bowilleatyou | "2025-04-10T18:24:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-10T10:34:58Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf | RichardErkhov | "2025-04-12T20:17:45Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-12T15:54:27Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Model_Kinship_2-2 - GGUF
- Model creator: https://huggingface.co/PotatoB/
- Original model: https://huggingface.co/PotatoB/Model_Kinship_2-2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Model_Kinship_2-2.Q2_K.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q2_K.gguf) | Q2_K | 2.53GB |
| [Model_Kinship_2-2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Model_Kinship_2-2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Model_Kinship_2-2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Model_Kinship_2-2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Model_Kinship_2-2.Q3_K.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q3_K.gguf) | Q3_K | 3.28GB |
| [Model_Kinship_2-2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Model_Kinship_2-2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Model_Kinship_2-2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Model_Kinship_2-2.Q4_0.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Model_Kinship_2-2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Model_Kinship_2-2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Model_Kinship_2-2.Q4_K.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q4_K.gguf) | Q4_K | 4.07GB |
| [Model_Kinship_2-2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Model_Kinship_2-2.Q4_1.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Model_Kinship_2-2.Q5_0.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Model_Kinship_2-2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Model_Kinship_2-2.Q5_K.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q5_K.gguf) | Q5_K | 4.78GB |
| [Model_Kinship_2-2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Model_Kinship_2-2.Q5_1.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Model_Kinship_2-2.Q6_K.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q6_K.gguf) | Q6_K | 5.53GB |
| [Model_Kinship_2-2.Q8_0.gguf](https://huggingface.co/RichardErkhov/PotatoB_-_Model_Kinship_2-2-gguf/blob/main/Model_Kinship_2-2.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
tags:
- merge
- mergekit
---
This is an open model for iterative merging experiments.
|
pinzhenchen/sft-lora-es-pythia-6b9 | pinzhenchen | "2024-03-05T23:53:35Z" | 0 | 0 | null | [
"generation",
"question answering",
"instruction tuning",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-03-05T23:53:31Z" |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-6.9b-deduped](https://huggingface.co/EleutherAI/pythia-6.9b-deduped)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
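As a minimal sketch of that loading pattern (generation settings and the prompt template should still be taken from the official instructions linked above):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-6.9b-deduped", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-6.9b-deduped")

# Attach the LoRA adapter from this repository on top of the base model
model = PeftModel.from_pretrained(base, "pinzhenchen/sft-lora-es-pythia-6b9")
```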
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
galbitang/autotrain-table_style_classification2-94510146124 | galbitang | "2023-10-13T16:21:53Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"autotrain",
"vision",
"dataset:galbitang/autotrain-data-table_style_classification2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-10-11T20:09:12Z" | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- galbitang/autotrain-data-table_style_classification2
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.07864701245065923
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 94510146124
- CO2 Emissions (in grams): 0.0786
## Validation Metrics
- Loss: 0.806
- Accuracy: 0.766
- Macro F1: 0.683
- Micro F1: 0.766
- Weighted F1: 0.750
- Macro Precision: 0.710
- Micro Precision: 0.766
- Weighted Precision: 0.744
- Macro Recall: 0.676
- Micro Recall: 0.766
- Weighted Recall: 0.766 |
anupkashyap7/mistral-sql-coder | anupkashyap7 | "2024-03-27T20:51:01Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"mistralai/Mistral-7B-v0.1",
"defog/sqlcoder-7b-2",
"base_model:defog/sqlcoder-7b-2",
"base_model:merge:defog/sqlcoder-7b-2",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:merge:mistralai/Mistral-7B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-27T20:11:58Z" | ---
tags:
- merge
- mergekit
- mistralai/Mistral-7B-v0.1
- defog/sqlcoder-7b-2
base_model:
- mistralai/Mistral-7B-v0.1
- defog/sqlcoder-7b-2
---
# mistral-sql-coder
mistral-sql-coder is a merge of the following models:
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
parameters:
density: 0.5
weight: 1.0
- model: defog/sqlcoder-7b-2
parameters:
density: 0.33
weight: 1.0
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "anupkashyap7/mistral-sql-coder"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
NbAiLab/nb-llama-3.1-8B-sft | NbAiLab | "2024-12-11T09:18:22Z" | 9 | 0 | null | [
"safetensors",
"llama",
"norwegian",
"bokmål",
"nynorsk",
"swedish",
"danish",
"multilingual",
"text-generation",
"conversational",
"no",
"nb",
"nn",
"en",
"sv",
"da",
"license:llama3.1",
"region:us"
] | text-generation | "2024-11-25T07:19:31Z" | ---
language:
- no # Generic Norwegian
- nb # Norwegian Bokmål
- nn # Norwegian Nynorsk
- en # English
- sv # Swedish
- da # Danish
tags:
- norwegian
- bokmål
- nynorsk
- swedish
- danish
- multilingual
- text-generation
pipeline_tag: text-generation
license: llama3.1
---
## Model Card: NB-Llama-3.1-8B-sft
---
### Model Overview
This is the SFT version of the NB-Llama models. This means the model has gone through supervised fine-tuning, and it now understands a basic template. Note that this model has not yet been aligned, so it will behave fairly unpredictably. It is best suited for additional fine-tuning.
**NB-Llama-3.1-8B-sft** is part of the **NB-Llama-3.1** series of models, trained on top of [NB-Llama-3.1-8B](https://huggingface.co/NbAiLab/Llama-3.1-8B). This multilingual generative model was fine-tuned specifically to support Norwegian Bokmål, Norwegian Nynorsk, and English, with partial support for Swedish and Danish.
The basic idea with this model series was to explore how current state-of-the-art models could be improved for Norwegian by training only on publicly available data. While these models are trained by the National Library of Norway, they do not include data only available through legal deposit. They do, however, contain public data like governmental reports that are both publicly available and legally deposited.
---
### Key Features
- **Base Model**: Built on NB-Llama-3.1-8B.
- **Languages**:
- Full support: Norwegian Bokmål (nb), Norwegian Nynorsk (nn), English (en).
- Partial support: Swedish (sv), Danish (da).
- **Purpose**: Supports Norwegian-specific tasks such as question-answering, summarization, and language modeling, while being capable of multilingual generation and translation. Efforts have been made to preserve the English capabilities from the underlying Meta Llama model.
- **Training Data**: Combines publicly available multilingual datasets with synthetic data generation, focusing on Norwegian, English, Swedish, and Danish sources. Additional details are provided below.
- **Architecture**: The model uses the Llama 3.1 architecture. It is an auto-regressive language model with an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) for alignment.
---
### Model Details
- **Developer**: National Library of Norway (NB-AiLab).
- **Parameters**: 8 billion.
- **Knowledge Cutoff**: May 2024.
- **License**: [Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3.1/LICENSE).
---
### Motivation
The primary goal of **NB-Llama-3.1-8B-sft** is to advance Norwegian language technologies and strengthen support for Norwegian Bokmål and Norwegian Nynorsk. Since much knowledge and culture are also expressed in English, Swedish, and Danish, open sources in these languages are included in the training datasets when possible.
---
### Intended Use
#### Use Cases
- Dialogue systems.
- General multilingual text generation and language modeling.
- Norwegian-specific tasks such as:
- Summarization of texts in Bokmål or Nynorsk.
- Question-answering tailored to Norwegian cultural and linguistic contexts.
#### Out-of-Scope
- Use in violation of applicable laws or regulations.
- Tasks outside the supported languages without additional fine-tuning.
- High-risk domains without appropriate safety measures.
---
### How to Use
Please note that this is still a research project, and the purpose of releasing the models is to investigate the potential of adapting these models for the Norwegian language. The intended use case is experimental. For end users, we strongly recommend the instruction-tuned models. We provide quantized models with close to the same accuracy that will run much faster on most platforms. When fine-tuning the instruction-tuned models, best results are obtained by applying the appropriate templates from Llama 3.1.
#### Using `transformers`
```python
import torch
from transformers import pipeline
model_id = "NbAiLab/nb-llama-3.1-8B-sft"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "user", "content": "Hvem er du?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
---
### Training Data
**Overview:**
The training data is based entirely on publicly available datasets and synthetically generated data. A key aspect of the training process was leveraging high-quality knowledge sources in Norwegian, English, Swedish, and Danish.
Parts of the following publicly available datasets were used:
- [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX)
- [High Performance Language Technologies (HPLT)](https://huggingface.co/datasets/HPLT/hplt_monolingual_v1_2)
- [Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NCC/Norwegian-Colossal-Corpus)
- [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia)
---
### Data Selection
To ensure the highest quality training data, only a small subset of the original raw data was used. [Corpus Quality Classifiers](https://huggingface.co/collections/NbAiLab/corpus-quality-classifier-673f15926c2774fcc88f23aa) built on [nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) were trained to evaluate both the educational value and the linguistic quality of the training samples. These models are released along with the NB-Llama-3.x models and are considered the main output of this initiative.
- **Categorization Methods:**
- Inspired by the [FineWeb](https://example.com/FineWeb) project.
- Evaluated for:
- **Educational Value:** Prioritizing high-value training samples.
- **Linguistic Quality:** Ensuring clarity and accuracy in training data.
- **Guidance and Release:**
- Categorization was guided by insights from [Gemini 1.5](https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#gemini-15).
  - The classifiers are released alongside this model and are [available here](https://huggingface.co/collections/NbAiLab/corpus-quality-classifier-673f15926c2774fcc88f23aa).
---
### Licensing
The model is released under the [Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3.1/LICENSE), allowing for research and commercial use within defined limitations. Refer to the [Acceptable Use Policy](https://llama.meta.com/llama3.1/use-policy) for specific restrictions.
---
### Citing & Authors
The model was trained and documentation written by Per Egil Kummervold as part of the NoTraM-project.
---
### Funding and Acknowledgement
Training this model was supported by Google’s TPU Research Cloud (TRC), which generously supplied us with Cloud TPUs essential for our computational
needs. |
robiual-awal/f5f98928-cf62-4ddf-bf84-8af459012a31 | robiual-awal | "2025-02-11T20:08:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B",
"base_model:adapter:unsloth/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-11T19:54:46Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f5f98928-cf62-4ddf-bf84-8af459012a31
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# f5f98928-cf62-4ddf-bf84-8af459012a31
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
YakovElm/Hyperledger15SetFitModel | YakovElm | "2023-05-20T18:20:40Z" | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-05-20T16:19:36Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Hyperledger15SetFitModel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Hyperledger15SetFitModel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
breckio/gangu.chettri.kanda.telegram.gangu.chhetri.kanda.telegram | breckio | "2025-04-14T15:59:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-14T15:59:06Z" | |
MaziyarPanahi/T3qm7xNeuralsirkrishna_Experiment27Shadow | MaziyarPanahi | "2024-04-08T01:18:41Z" | 18 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/Experiment27Shadow-7B",
"base_model:merge:automerger/Experiment27Shadow-7B",
"base_model:automerger/T3qm7xNeuralsirkrishna-7B",
"base_model:merge:automerger/T3qm7xNeuralsirkrishna-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-08T01:05:32Z" | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: T3qm7xNeuralsirkrishna_Experiment27Shadow
base_model:
- automerger/T3qm7xNeuralsirkrishna-7B
- automerger/Experiment27Shadow-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# T3qm7xNeuralsirkrishna_Experiment27Shadow
T3qm7xNeuralsirkrishna_Experiment27Shadow is a merge of the following models:
* [automerger/T3qm7xNeuralsirkrishna-7B](https://huggingface.co/automerger/T3qm7xNeuralsirkrishna-7B)
* [automerger/Experiment27Shadow-7B](https://huggingface.co/automerger/Experiment27Shadow-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/T3qm7xNeuralsirkrishna_Experiment27Shadow"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
LarryAIDraw/viewerMix_viewerMixV10 | LarryAIDraw | "2023-02-13T22:19:45Z" | 0 | 4 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-02-13T20:44:14Z" | ---
license: creativeml-openrail-m
---
|
headmaking/trpfrog-icons | headmaking | "2023-01-27T13:39:57Z" | 3 | 0 | diffusers | [
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"wildcard",
"dataset:TrpFrog/trpfrog-icons",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-01-16T10:28:16Z" |
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- wildcard
datasets:
- TrpFrog/trpfrog-icons
widget:
- text: an icon of trpfrog
---
# DreamBooth model for the trpfrog concept trained by Prgckwb on the TrpFrog/trpfrog-icons dataset.
This is a Stable Diffusion model fine-tuned on the trpfrog concept with DreamBooth. It can be used by modifying the `instance_prompt`: **an icon of trpfrog**
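A minimal generation sketch with `diffusers` (precision and device choices are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "headmaking/trpfrog-icons", torch_dtype=torch.float16
).to("cuda")

# Use the instance prompt the model was trained on
image = pipe("an icon of trpfrog").images[0]
image.save("trpfrog-icon.png")
``` |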
Govind-Vurjana/en_pipeline | Govind-Vurjana | "2024-02-26T20:32:10Z" | 0 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | token-classification | "2024-02-26T20:12:42Z" | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9851430668
- name: NER Recall
type: recall
value: 0.9871347179
- name: NER F Score
type: f_score
value: 0.9861378867
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.7.2,<3.8.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (16 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `AGE GROUP`, `BENCHMARKS`, `CADAVERS/BIOMECHANICAL/NONCLINICAL`, `DEVICE`, `FOLLOW UP`, `INDICATION`, `LEVEL OF EVIDENCE `, `MANAGEMENT`, `PERFORMANCE OUTCOMES`, `RISK FACTORS`, `SAFETY OUTCOMES`, `SCORES`, `SEX`, `STATISTICAL SIGNIFICANCE`, `STRYKER`, `STUDY TYPE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 98.61 |
| `ENTS_P` | 98.51 |
| `ENTS_R` | 98.71 |
| `TOK2VEC_LOSS` | 44342.33 |
| `NER_LOSS` | 204832.98 | |
kk-aivio/23900514-66b2-419d-a45d-70b7ab5b7a9e | kk-aivio | "2025-02-28T09:25:45Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"region:us"
] | null | "2025-02-28T09:25:41Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/tinyllama-chat
model-index:
- name: kk-aivio/23900514-66b2-419d-a45d-70b7ab5b7a9e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/23900514-66b2-419d-a45d-70b7ab5b7a9e
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
pfunk/CartPole-v1-CP_DQN-seed355 | pfunk | "2023-03-20T19:09:44Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-20T19:09:41Z" | ---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 140.22 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **CartPole-v1**
This is a trained model of a DQN agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQN.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQN]"
python -m cleanrl_utils.enjoy --exp-name CP_DQN --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQN-seed355/raw/main/dqn.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQN-seed355/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQN-seed355/raw/main/poetry.lock
poetry install --all-extras
python dqn.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQN --seed 355
```
# Hyperparameters
```python
{'alg_type': 'dqn.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQN',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'save_model': True,
'seed': 355,
'start_e': 1.0,
'target_network_frequency': 100,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
FounderOfHuggingface/gpt2_lora_r16_ag_news_t200_e5_member_shadow6 | FounderOfHuggingface | "2023-11-30T12:52:26Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-11-30T12:52:23Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
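In the absence of author-provided code, here is a minimal sketch that loads this LoRA adapter onto its `gpt2` base (the prompt text is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the gpt2 base model and attach this LoRA adapter
base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "FounderOfHuggingface/gpt2_lora_r16_ag_news_t200_e5_member_shadow6")

inputs = tokenizer("Wall Street stocks rallied after", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```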
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
elewah/gpt2-ag-news-peft | elewah | "2025-04-03T13:10:51Z" | 113 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2025-03-30T11:01:20Z" |
|
tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF | tensorblock | "2024-11-16T01:16:45Z" | 426 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
"base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-12T07:30:25Z" | ---
license: other
license_name: deepseek-license
license_link: LICENSE
base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct - GGUF
This repo contains GGUF format model files for [deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin▁of▁sentence|>{system_prompt}
User: {prompt}
Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [DeepSeek-Coder-V2-Lite-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q2_K.gguf) | Q2_K | 5.989 GB | smallest, significant quality loss - not recommended for most purposes |
| [DeepSeek-Coder-V2-Lite-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q3_K_S.gguf) | Q3_K_S | 6.973 GB | very small, high quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q3_K_M.gguf) | Q3_K_M | 7.568 GB | very small, high quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q3_K_L.gguf) | Q3_K_L | 7.878 GB | small, substantial quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q4_0.gguf) | Q4_0 | 8.294 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [DeepSeek-Coder-V2-Lite-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_S.gguf) | Q4_K_S | 8.879 GB | small, greater quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf) | Q4_K_M | 9.653 GB | medium, balanced quality - recommended |
| [DeepSeek-Coder-V2-Lite-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q5_0.gguf) | Q5_0 | 10.097 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [DeepSeek-Coder-V2-Lite-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_S.gguf) | Q5_K_S | 10.378 GB | large, low quality loss - recommended |
| [DeepSeek-Coder-V2-Lite-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_M.gguf) | Q5_K_M | 11.037 GB | large, very low quality loss - recommended |
| [DeepSeek-Coder-V2-Lite-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q6_K.gguf) | Q6_K | 13.101 GB | very large, extremely low quality loss |
| [DeepSeek-Coder-V2-Lite-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q8_0.gguf) | Q8_0 | 15.555 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF --include "DeepSeek-Coder-V2-Lite-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/DeepSeek-Coder-V2-Lite-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
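Once downloaded, the GGUF file can be run locally with llama.cpp, for example (a sketch; the quant choice, path, and flags are illustrative):
```shell
./llama-cli -m MY_LOCAL_DIR/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf \
  -p "Write a binary search in Python." -n 256
```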
|
Helsinki-NLP/opus-mt-alv-en | Helsinki-NLP | "2023-08-16T11:25:30Z" | 122 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"alv",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
language:
- sn
- rw
- wo
- ig
- sg
- ee
- zu
- lg
- ts
- ln
- ny
- yo
- rn
- xh
- alv
- en
tags:
- translation
license: apache-2.0
---
### alv-eng
* source group: Atlantic-Congo languages
* target group: English
* OPUS readme: [alv-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md)
* model: transformer
* source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.eval.txt)
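## Usage
A minimal sketch with 🤗 Transformers (the Shona example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-alv-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Shona greeting to English
batch = tokenizer(["Mhoro shamwari!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```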
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 6.3 | 0.328 |
| Tatoeba-test.ful-eng.ful.eng | 0.4 | 0.108 |
| Tatoeba-test.ibo-eng.ibo.eng | 4.5 | 0.196 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.511 |
| Tatoeba-test.lin-eng.lin.eng | 2.8 | 0.213 |
| Tatoeba-test.lug-eng.lug.eng | 3.4 | 0.140 |
| Tatoeba-test.multi.eng | 20.9 | 0.376 |
| Tatoeba-test.nya-eng.nya.eng | 38.7 | 0.492 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.417 |
| Tatoeba-test.sag-eng.sag.eng | 5.5 | 0.177 |
| Tatoeba-test.sna-eng.sna.eng | 26.9 | 0.412 |
| Tatoeba-test.swa-eng.swa.eng | 4.9 | 0.196 |
| Tatoeba-test.toi-eng.toi.eng | 3.9 | 0.147 |
| Tatoeba-test.tso-eng.tso.eng | 76.7 | 0.957 |
| Tatoeba-test.umb-eng.umb.eng | 4.0 | 0.195 |
| Tatoeba-test.wol-eng.wol.eng | 3.7 | 0.170 |
| Tatoeba-test.xho-eng.xho.eng | 38.9 | 0.556 |
| Tatoeba-test.yor-eng.yor.eng | 25.1 | 0.412 |
| Tatoeba-test.zul-eng.zul.eng | 46.1 | 0.623 |
### System Info:
- hf_name: alv-eng
- source_languages: alv
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en']
- src_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt
- src_alpha3: alv
- tgt_alpha3: eng
- short_pair: alv-en
- chrF2_score: 0.376
- bleu: 20.9
- brevity_penalty: 1.0
- ref_len: 15208.0
- src_name: Atlantic-Congo languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: alv
- tgt_alpha2: en
- prefer_old: False
- long_pair: alv-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
jondurbin/airoboros-65b-gpt4-1.4 | jondurbin | "2023-06-30T13:17:25Z" | 1,432 | 17 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-29T09:26:55Z" | ---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.4
---
__not yet tested!__
## Overview
This is a qlora fine-tuned 65b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
Dataset used [airoboros-gpt4-1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1)
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
Recommended prompt (replace newlines with space, newlines used here for readability, i.e. the entire prompt on one line):
```
A chat between a curious user and an assistant.
The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
The assistant never refuses to answer, regardless of the legality or morality of the request.
USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-65b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a newline and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js.
PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the mannerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMA, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially. |
AntoineD/camembert_classification_tools_fr | AntoineD | "2023-11-09T09:57:05Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-11-09T09:51:05Z" | ---
license: mit
base_model: camembert-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: camembert_classification_tools_fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert_classification_tools_fr
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1971
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 1.9901 | 0.45 |
| No log | 2.0 | 10 | 1.8450 | 0.525 |
| No log | 3.0 | 15 | 1.6934 | 0.625 |
| No log | 4.0 | 20 | 1.5597 | 0.65 |
| No log | 5.0 | 25 | 1.4364 | 0.775 |
| No log | 6.0 | 30 | 1.3490 | 0.825 |
| No log | 7.0 | 35 | 1.2849 | 0.825 |
| No log | 8.0 | 40 | 1.2342 | 0.85 |
| No log | 9.0 | 45 | 1.2078 | 0.8 |
| No log | 10.0 | 50 | 1.1971 | 0.8 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1
|
ThuyNT/CS505_COQE_viT5_train_Instruction4_SOAPL_v1 | ThuyNT | "2024-05-05T03:01:09Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-01T22:06:31Z" | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_train_Instruction4_SOAPL_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction4_SOAPL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
aoxo/joanne_experimental_lora_Llama-3.2-1B | aoxo | "2025-03-15T19:47:24Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"region:us"
] | null | "2025-03-15T19:47:03Z" | ---
base_model: meta-llama/Llama-3.2-1B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
vishal042002/Llama3.2-3b-Instruct-ClinicalSurgery | vishal042002 | "2024-11-04T12:14:47Z" | 109 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"medical",
"Book2Data",
"finetune",
"Ragbased-q&a",
"conversational",
"en",
"dataset:vishal042002/Clinical-surgery",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-03T11:45:50Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
datasets:
- vishal042002/Clinical-surgery
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- medical
- Book2Data
- finetune
- Ragbased-q&a
- safetensors
---
# Uploaded model
- **Developed by:** vishal042002
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
The model was trained on a custom dataset containing clinical surgery Q&A pairs. The dataset was compiled from:
- Open-source medical books
RUNNING THE MODEL THROUGH ADAPTER MERGE:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
base_model_name = "unsloth/Llama-3.2-3B-Instruct"
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype=torch.float16, device_map="auto")
adapter_path = "vishal042002/Llama3.2-3b-Instruct-ClinicalSurgery"
base_model = PeftModel.from_pretrained(base_model, adapter_path)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
base_model.to(device)
# Sample usage
input_text = "What is the mortality rate for patients requiring surgical intervention who were unstable preoperatively?"
inputs = tokenizer(input_text, return_tensors="pt").to(device)
outputs = base_model.generate(**inputs, max_new_tokens=200, temperature=1.5, top_p=0.9)
decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded_output)
```
LOADING THE MODEL DIRECTLY:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "vishal042002/Llama3.2-3b-Instruct-ClinicalSurgery"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
This model is designed to:
- Answer questions about clinical surgery procedures.
- Provide information about surgical interventions.

Limitations:
- The model should not be used as a substitute for professional medical advice.
- Responses should be verified by qualified medical professionals. |
ImNotATeapot/lora_model | ImNotATeapot | "2025-03-29T14:07:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-29T13:40:05Z" | ---
base_model: unsloth/llama-3.2-1b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ImNotATeapot
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JuniperChinenye/gugu4 | JuniperChinenye | "2024-12-26T22:23:09Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-26T22:20:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rewicks/baseline_en-de_64k_ep34 | rewicks | "2024-10-28T16:24:50Z" | 117 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-10-28T16:23:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/h2o-danube2-1.8b-sft-4bit | mlx-community | "2024-04-09T09:36:52Z" | 58 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"mlx",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-09T09:25:22Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
- mlx
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
pipeline_tag: text-generation
---
# mlx-community/h2o-danube2-1.8b-sft-4bit
This model was converted to MLX format from [`h2oai/h2o-danube2-1.8b-sft`](https://huggingface.co/h2oai/h2o-danube2-1.8b-sft) using mlx-lm version **0.8.0**.
Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).
Refer to the [original model card](https://huggingface.co/h2oai/h2o-danube2-1.8b-sft) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/h2o-danube2-1.8b-sft-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
income/bpr-gpl-fever-base-msmarco-distilbert-tas-b | income | "2022-06-16T18:22:17Z" | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-06-16T18:22:11Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# income/bpr-gpl-fever-base-msmarco-distilbert-tas-b
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('income/bpr-gpl-fever-base-msmarco-distilbert-tas-b')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('income/bpr-gpl-fever-base-msmarco-distilbert-tas-b')
model = AutoModel.from_pretrained('income/bpr-gpl-fever-base-msmarco-distilbert-tas-b')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
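To compare the resulting embeddings, for example for semantic search, cosine similarity is the usual metric; a short continuation of the snippet above:
```python
import torch.nn.functional as F

# L2-normalize so cosine similarity reduces to a dot product
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
similarity = normalized[0] @ normalized[1]
print(f"Cosine similarity: {similarity.item():.4f}")
```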
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=income/bpr-gpl-fever-base-msmarco-distilbert-tas-b)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 169267 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit() method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Yureeh/ppo-CartPole-v1 | Yureeh | "2023-04-06T15:23:21Z" | 0 | 0 | null | [
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-06T15:22:49Z" | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 197.90 +/- 63.33
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Yureeh/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
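Note that the derived sizes follow directly from the rollout settings; as a quick sanity check:
```python
num_envs, num_steps, num_minibatches = 4, 128, 4
batch_size = num_envs * num_steps               # 4 * 128 = 512
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128
assert (batch_size, minibatch_size) == (512, 128)
```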
|
microsoft/trocr-large-printed | microsoft | "2024-05-27T20:09:18Z" | 249,003 | 156 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"trocr",
"image-to-text",
"arxiv:2109.10282",
"endpoints_compatible",
"region:us"
] | image-to-text | "2022-03-02T23:29:05Z" | ---
tags:
- trocr
- image-to-text
widget:
- src: https://layoutlm.blob.core.windows.net/trocr/dataset/SROIE2019Task2Crop/train/X00016469612_1.jpg
example_title: Printed 1
- src: https://layoutlm.blob.core.windows.net/trocr/dataset/SROIE2019Task2Crop/train/X51005255805_7.jpg
example_title: Printed 2
- src: https://layoutlm.blob.core.windows.net/trocr/dataset/SROIE2019Task2Crop/train/X51005745214_6.jpg
example_title: Printed 3
---
# TrOCR (large-sized model, fine-tuned on SROIE)
TrOCR model fine-tuned on the [SROIE dataset](https://rrc.cvc.uab.es/?ch=13). It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).
Disclaimer: The team releasing TrOCR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT, while the text decoder was initialized from the weights of RoBERTa.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Next, the Transformer text decoder autoregressively generates tokens.
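For a sense of scale, the encoder's sequence length follows directly from the patch size; a small sketch (the 384x384 input resolution is an assumption here, check the processor config for the actual value):
```python
# hypothetical numbers: image_size is assumed, patch_size comes from the paper
image_size, patch_size = 384, 16
num_patches = (image_size // patch_size) ** 2
print(num_patches)  # 576 patch embeddings per image
```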
## Intended uses & limitations
You can use the raw model for optical character recognition (OCR) on single text-line images. See the [model hub](https://huggingface.co/models?search=microsoft/trocr) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import requests
# load image from the IAM database (actually this model is meant to be used on printed text)
url = 'https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-large-printed')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-large-printed')
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
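print(generated_text)  # the recognized text for the input image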
```
### BibTeX entry and citation info
```bibtex
@misc{li2021trocr,
title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
year={2021},
eprint={2109.10282},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
crossroderick/ppo-Pyramids | crossroderick | "2024-03-05T23:32:58Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2024-03-05T23:32:53Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
 https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: crossroderick/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MayBashendy/ArabicNewSplits6_FineTuningAraBERTFreeze_run1_AugV5_k19_task2_organization | MayBashendy | "2024-12-24T14:39:02Z" | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-24T14:13:00Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits6_FineTuningAraBERTFreeze_run1_AugV5_k19_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits6_FineTuningAraBERTFreeze_run1_AugV5_k19_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7953
- Qwk: 0.5064
- Mse: 0.7953
- Rmse: 0.8918
## Model description
More information needed
## Intended uses & limitations
More information needed
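A minimal, untested loading sketch. The exact head configuration is not documented here (the Qwk/MSE metrics suggest an ordinal scoring task), so treat the output interpretation as an assumption:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "MayBashendy/ArabicNewSplits6_FineTuningAraBERTFreeze_run1_AugV5_k19_task2_organization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("نص عربي للتقييم", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # a score/logit tensor; interpretation depends on the training setup
```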
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0426 | 2 | 6.3996 | -0.0218 | 6.3996 | 2.5297 |
| No log | 0.0851 | 4 | 4.4435 | -0.0328 | 4.4435 | 2.1080 |
| No log | 0.1277 | 6 | 3.1044 | 0.0151 | 3.1044 | 1.7619 |
| No log | 0.1702 | 8 | 2.1782 | 0.0094 | 2.1782 | 1.4759 |
| No log | 0.2128 | 10 | 1.5328 | 0.0563 | 1.5328 | 1.2381 |
| No log | 0.2553 | 12 | 1.2230 | 0.0543 | 1.2230 | 1.1059 |
| No log | 0.2979 | 14 | 1.0955 | 0.0354 | 1.0955 | 1.0466 |
| No log | 0.3404 | 16 | 1.1726 | 0.0326 | 1.1726 | 1.0829 |
| No log | 0.3830 | 18 | 1.0884 | 0.0351 | 1.0884 | 1.0433 |
| No log | 0.4255 | 20 | 0.8684 | 0.0442 | 0.8684 | 0.9319 |
| No log | 0.4681 | 22 | 0.7519 | 0.1786 | 0.7519 | 0.8671 |
| No log | 0.5106 | 24 | 0.7268 | 0.1875 | 0.7268 | 0.8525 |
| No log | 0.5532 | 26 | 0.7315 | 0.2044 | 0.7315 | 0.8553 |
| No log | 0.5957 | 28 | 0.7553 | 0.2498 | 0.7553 | 0.8691 |
| No log | 0.6383 | 30 | 0.8324 | 0.2282 | 0.8324 | 0.9123 |
| No log | 0.6809 | 32 | 0.9401 | 0.1705 | 0.9401 | 0.9696 |
| No log | 0.7234 | 34 | 1.0287 | 0.1040 | 1.0287 | 1.0143 |
| No log | 0.7660 | 36 | 1.0050 | 0.1141 | 1.0050 | 1.0025 |
| No log | 0.8085 | 38 | 0.9775 | 0.1670 | 0.9775 | 0.9887 |
| No log | 0.8511 | 40 | 0.9461 | 0.1927 | 0.9461 | 0.9727 |
| No log | 0.8936 | 42 | 0.8578 | 0.2127 | 0.8578 | 0.9262 |
| No log | 0.9362 | 44 | 0.7840 | 0.2766 | 0.7840 | 0.8854 |
| No log | 0.9787 | 46 | 0.7278 | 0.2390 | 0.7278 | 0.8531 |
| No log | 1.0213 | 48 | 0.7476 | 0.3036 | 0.7476 | 0.8646 |
| No log | 1.0638 | 50 | 0.7187 | 0.3078 | 0.7187 | 0.8477 |
| No log | 1.1064 | 52 | 0.6941 | 0.3402 | 0.6941 | 0.8331 |
| No log | 1.1489 | 54 | 0.6694 | 0.3983 | 0.6694 | 0.8182 |
| No log | 1.1915 | 56 | 0.6368 | 0.4031 | 0.6368 | 0.7980 |
| No log | 1.2340 | 58 | 0.6198 | 0.4065 | 0.6198 | 0.7873 |
| No log | 1.2766 | 60 | 0.6285 | 0.4288 | 0.6285 | 0.7928 |
| No log | 1.3191 | 62 | 0.6349 | 0.4316 | 0.6349 | 0.7968 |
| No log | 1.3617 | 64 | 0.6406 | 0.4316 | 0.6406 | 0.8004 |
| No log | 1.4043 | 66 | 0.6828 | 0.3875 | 0.6828 | 0.8263 |
| No log | 1.4468 | 68 | 0.7990 | 0.3558 | 0.7990 | 0.8939 |
| No log | 1.4894 | 70 | 0.9981 | 0.3805 | 0.9981 | 0.9991 |
| No log | 1.5319 | 72 | 1.1016 | 0.3394 | 1.1016 | 1.0496 |
| No log | 1.5745 | 74 | 0.9917 | 0.3830 | 0.9917 | 0.9959 |
| No log | 1.6170 | 76 | 0.7962 | 0.4130 | 0.7962 | 0.8923 |
| No log | 1.6596 | 78 | 0.6587 | 0.4430 | 0.6587 | 0.8116 |
| No log | 1.7021 | 80 | 0.5785 | 0.4417 | 0.5785 | 0.7606 |
| No log | 1.7447 | 82 | 0.5639 | 0.4151 | 0.5639 | 0.7509 |
| No log | 1.7872 | 84 | 0.5928 | 0.4264 | 0.5928 | 0.7700 |
| No log | 1.8298 | 86 | 0.6783 | 0.3920 | 0.6783 | 0.8236 |
| No log | 1.8723 | 88 | 0.8102 | 0.3877 | 0.8102 | 0.9001 |
| No log | 1.9149 | 90 | 0.8701 | 0.3331 | 0.8701 | 0.9328 |
| No log | 1.9574 | 92 | 0.8514 | 0.3698 | 0.8514 | 0.9227 |
| No log | 2.0 | 94 | 0.7873 | 0.4066 | 0.7873 | 0.8873 |
| No log | 2.0426 | 96 | 0.7087 | 0.4367 | 0.7087 | 0.8419 |
| No log | 2.0851 | 98 | 0.6501 | 0.4640 | 0.6501 | 0.8063 |
| No log | 2.1277 | 100 | 0.6195 | 0.4459 | 0.6195 | 0.7871 |
| No log | 2.1702 | 102 | 0.6316 | 0.4358 | 0.6316 | 0.7948 |
| No log | 2.2128 | 104 | 0.6709 | 0.4531 | 0.6709 | 0.8191 |
| No log | 2.2553 | 106 | 0.6887 | 0.4566 | 0.6887 | 0.8299 |
| No log | 2.2979 | 108 | 0.7561 | 0.4624 | 0.7561 | 0.8695 |
| No log | 2.3404 | 110 | 0.8088 | 0.4619 | 0.8088 | 0.8993 |
| No log | 2.3830 | 112 | 0.9118 | 0.3691 | 0.9118 | 0.9549 |
| No log | 2.4255 | 114 | 0.8701 | 0.3880 | 0.8701 | 0.9328 |
| No log | 2.4681 | 116 | 0.7794 | 0.4617 | 0.7794 | 0.8828 |
| No log | 2.5106 | 118 | 0.6725 | 0.4736 | 0.6725 | 0.8201 |
| No log | 2.5532 | 120 | 0.5859 | 0.5030 | 0.5859 | 0.7655 |
| No log | 2.5957 | 122 | 0.5695 | 0.4614 | 0.5695 | 0.7546 |
| No log | 2.6383 | 124 | 0.5736 | 0.4733 | 0.5736 | 0.7574 |
| No log | 2.6809 | 126 | 0.5990 | 0.4679 | 0.5990 | 0.7739 |
| No log | 2.7234 | 128 | 0.6615 | 0.4976 | 0.6615 | 0.8133 |
| No log | 2.7660 | 130 | 0.6906 | 0.4812 | 0.6906 | 0.8310 |
| No log | 2.8085 | 132 | 0.6672 | 0.4776 | 0.6672 | 0.8168 |
| No log | 2.8511 | 134 | 0.6247 | 0.5098 | 0.6247 | 0.7904 |
| No log | 2.8936 | 136 | 0.6121 | 0.5235 | 0.6121 | 0.7824 |
| No log | 2.9362 | 138 | 0.6026 | 0.5215 | 0.6026 | 0.7763 |
| No log | 2.9787 | 140 | 0.5990 | 0.4817 | 0.5990 | 0.7739 |
| No log | 3.0213 | 142 | 0.6343 | 0.4653 | 0.6343 | 0.7964 |
| No log | 3.0638 | 144 | 0.6910 | 0.4807 | 0.6910 | 0.8313 |
| No log | 3.1064 | 146 | 0.7834 | 0.4874 | 0.7834 | 0.8851 |
| No log | 3.1489 | 148 | 0.7861 | 0.4942 | 0.7861 | 0.8866 |
| No log | 3.1915 | 150 | 0.7120 | 0.5082 | 0.7120 | 0.8438 |
| No log | 3.2340 | 152 | 0.6583 | 0.4849 | 0.6583 | 0.8113 |
| No log | 3.2766 | 154 | 0.6776 | 0.4766 | 0.6776 | 0.8232 |
| No log | 3.3191 | 156 | 0.7384 | 0.4608 | 0.7384 | 0.8593 |
| No log | 3.3617 | 158 | 0.8306 | 0.4368 | 0.8306 | 0.9114 |
| No log | 3.4043 | 160 | 0.8937 | 0.4532 | 0.8937 | 0.9454 |
| No log | 3.4468 | 162 | 0.9271 | 0.4725 | 0.9271 | 0.9629 |
| No log | 3.4894 | 164 | 0.8066 | 0.4234 | 0.8066 | 0.8981 |
| No log | 3.5319 | 166 | 0.7051 | 0.4913 | 0.7051 | 0.8397 |
| No log | 3.5745 | 168 | 0.6770 | 0.4603 | 0.6770 | 0.8228 |
| No log | 3.6170 | 170 | 0.6740 | 0.4776 | 0.6740 | 0.8210 |
| No log | 3.6596 | 172 | 0.6675 | 0.4991 | 0.6675 | 0.8170 |
| No log | 3.7021 | 174 | 0.6975 | 0.4991 | 0.6975 | 0.8352 |
| No log | 3.7447 | 176 | 0.7390 | 0.4947 | 0.7390 | 0.8596 |
| No log | 3.7872 | 178 | 0.7033 | 0.4986 | 0.7033 | 0.8386 |
| No log | 3.8298 | 180 | 0.6414 | 0.4977 | 0.6414 | 0.8009 |
| No log | 3.8723 | 182 | 0.6045 | 0.5620 | 0.6045 | 0.7775 |
| No log | 3.9149 | 184 | 0.5970 | 0.5640 | 0.5970 | 0.7727 |
| No log | 3.9574 | 186 | 0.6075 | 0.5552 | 0.6075 | 0.7794 |
| No log | 4.0 | 188 | 0.6373 | 0.5459 | 0.6373 | 0.7983 |
| No log | 4.0426 | 190 | 0.6769 | 0.5461 | 0.6769 | 0.8227 |
| No log | 4.0851 | 192 | 0.6767 | 0.5588 | 0.6767 | 0.8226 |
| No log | 4.1277 | 194 | 0.6600 | 0.4669 | 0.6600 | 0.8124 |
| No log | 4.1702 | 196 | 0.6808 | 0.4811 | 0.6808 | 0.8251 |
| No log | 4.2128 | 198 | 0.6947 | 0.4937 | 0.6947 | 0.8335 |
| No log | 4.2553 | 200 | 0.6976 | 0.4881 | 0.6976 | 0.8352 |
| No log | 4.2979 | 202 | 0.7295 | 0.4537 | 0.7295 | 0.8541 |
| No log | 4.3404 | 204 | 0.8136 | 0.4836 | 0.8136 | 0.9020 |
| No log | 4.3830 | 206 | 1.0381 | 0.3553 | 1.0381 | 1.0189 |
| No log | 4.4255 | 208 | 1.1739 | 0.3337 | 1.1739 | 1.0835 |
| No log | 4.4681 | 210 | 1.0770 | 0.3537 | 1.0770 | 1.0378 |
| No log | 4.5106 | 212 | 0.8619 | 0.3999 | 0.8619 | 0.9284 |
| No log | 4.5532 | 214 | 0.7020 | 0.4283 | 0.7020 | 0.8378 |
| No log | 4.5957 | 216 | 0.6700 | 0.4974 | 0.6700 | 0.8185 |
| No log | 4.6383 | 218 | 0.6631 | 0.5133 | 0.6631 | 0.8143 |
| No log | 4.6809 | 220 | 0.6707 | 0.4796 | 0.6707 | 0.8190 |
| No log | 4.7234 | 222 | 0.7140 | 0.4628 | 0.7140 | 0.8450 |
| No log | 4.7660 | 224 | 0.7652 | 0.4678 | 0.7652 | 0.8748 |
| No log | 4.8085 | 226 | 0.7972 | 0.4456 | 0.7972 | 0.8929 |
| No log | 4.8511 | 228 | 0.7540 | 0.4478 | 0.7540 | 0.8683 |
| No log | 4.8936 | 230 | 0.7238 | 0.4651 | 0.7238 | 0.8508 |
| No log | 4.9362 | 232 | 0.7155 | 0.4697 | 0.7155 | 0.8459 |
| No log | 4.9787 | 234 | 0.7087 | 0.4697 | 0.7087 | 0.8418 |
| No log | 5.0213 | 236 | 0.7104 | 0.4851 | 0.7104 | 0.8429 |
| No log | 5.0638 | 238 | 0.7184 | 0.4843 | 0.7184 | 0.8476 |
| No log | 5.1064 | 240 | 0.7342 | 0.4441 | 0.7342 | 0.8568 |
| No log | 5.1489 | 242 | 0.7805 | 0.4579 | 0.7805 | 0.8835 |
| No log | 5.1915 | 244 | 0.8082 | 0.4439 | 0.8082 | 0.8990 |
| No log | 5.2340 | 246 | 0.7669 | 0.4770 | 0.7669 | 0.8757 |
| No log | 5.2766 | 248 | 0.7092 | 0.4831 | 0.7092 | 0.8421 |
| No log | 5.3191 | 250 | 0.6862 | 0.4729 | 0.6862 | 0.8283 |
| No log | 5.3617 | 252 | 0.6834 | 0.5069 | 0.6834 | 0.8267 |
| No log | 5.4043 | 254 | 0.6760 | 0.4718 | 0.6760 | 0.8222 |
| No log | 5.4468 | 256 | 0.6728 | 0.4697 | 0.6728 | 0.8202 |
| No log | 5.4894 | 258 | 0.7054 | 0.4820 | 0.7054 | 0.8399 |
| No log | 5.5319 | 260 | 0.7535 | 0.4994 | 0.7535 | 0.8680 |
| No log | 5.5745 | 262 | 0.7785 | 0.4838 | 0.7785 | 0.8823 |
| No log | 5.6170 | 264 | 0.7830 | 0.4987 | 0.7830 | 0.8849 |
| No log | 5.6596 | 266 | 0.7722 | 0.4733 | 0.7722 | 0.8787 |
| No log | 5.7021 | 268 | 0.7592 | 0.4824 | 0.7592 | 0.8713 |
| No log | 5.7447 | 270 | 0.7412 | 0.4879 | 0.7412 | 0.8609 |
| No log | 5.7872 | 272 | 0.7183 | 0.5020 | 0.7183 | 0.8475 |
| No log | 5.8298 | 274 | 0.7029 | 0.4653 | 0.7029 | 0.8384 |
| No log | 5.8723 | 276 | 0.6876 | 0.4779 | 0.6876 | 0.8292 |
| No log | 5.9149 | 278 | 0.6915 | 0.5084 | 0.6915 | 0.8316 |
| No log | 5.9574 | 280 | 0.6715 | 0.5090 | 0.6715 | 0.8194 |
| No log | 6.0 | 282 | 0.6830 | 0.5018 | 0.6830 | 0.8265 |
| No log | 6.0426 | 284 | 0.7089 | 0.4744 | 0.7089 | 0.8420 |
| No log | 6.0851 | 286 | 0.6974 | 0.4916 | 0.6974 | 0.8351 |
| No log | 6.1277 | 288 | 0.6745 | 0.5263 | 0.6745 | 0.8213 |
| No log | 6.1702 | 290 | 0.6973 | 0.5130 | 0.6973 | 0.8350 |
| No log | 6.2128 | 292 | 0.7266 | 0.5216 | 0.7266 | 0.8524 |
| No log | 6.2553 | 294 | 0.7385 | 0.4806 | 0.7385 | 0.8594 |
| No log | 6.2979 | 296 | 0.7266 | 0.4766 | 0.7266 | 0.8524 |
| No log | 6.3404 | 298 | 0.7255 | 0.4879 | 0.7255 | 0.8518 |
| No log | 6.3830 | 300 | 0.7494 | 0.4677 | 0.7494 | 0.8657 |
| No log | 6.4255 | 302 | 0.7818 | 0.4695 | 0.7818 | 0.8842 |
| No log | 6.4681 | 304 | 0.7754 | 0.4848 | 0.7754 | 0.8806 |
| No log | 6.5106 | 306 | 0.7236 | 0.4896 | 0.7236 | 0.8507 |
| No log | 6.5532 | 308 | 0.7123 | 0.4730 | 0.7123 | 0.8440 |
| No log | 6.5957 | 310 | 0.7204 | 0.4943 | 0.7204 | 0.8488 |
| No log | 6.6383 | 312 | 0.7583 | 0.5040 | 0.7583 | 0.8708 |
| No log | 6.6809 | 314 | 0.8239 | 0.4424 | 0.8239 | 0.9077 |
| No log | 6.7234 | 316 | 0.8018 | 0.4459 | 0.8018 | 0.8954 |
| No log | 6.7660 | 318 | 0.7570 | 0.4655 | 0.7570 | 0.8700 |
| No log | 6.8085 | 320 | 0.7316 | 0.4614 | 0.7316 | 0.8553 |
| No log | 6.8511 | 322 | 0.7119 | 0.4651 | 0.7119 | 0.8437 |
| No log | 6.8936 | 324 | 0.7080 | 0.5048 | 0.7080 | 0.8414 |
| No log | 6.9362 | 326 | 0.7001 | 0.4720 | 0.7001 | 0.8367 |
| No log | 6.9787 | 328 | 0.7060 | 0.5180 | 0.7060 | 0.8402 |
| No log | 7.0213 | 330 | 0.7036 | 0.5135 | 0.7036 | 0.8388 |
| No log | 7.0638 | 332 | 0.6871 | 0.5224 | 0.6871 | 0.8289 |
| No log | 7.1064 | 334 | 0.6936 | 0.5044 | 0.6936 | 0.8328 |
| No log | 7.1489 | 336 | 0.7082 | 0.5038 | 0.7082 | 0.8415 |
| No log | 7.1915 | 338 | 0.7128 | 0.5168 | 0.7128 | 0.8442 |
| No log | 7.2340 | 340 | 0.7219 | 0.5178 | 0.7219 | 0.8497 |
| No log | 7.2766 | 342 | 0.7126 | 0.5178 | 0.7126 | 0.8441 |
| No log | 7.3191 | 344 | 0.7172 | 0.5178 | 0.7172 | 0.8469 |
| No log | 7.3617 | 346 | 0.7009 | 0.5254 | 0.7009 | 0.8372 |
| No log | 7.4043 | 348 | 0.6857 | 0.4890 | 0.6857 | 0.8281 |
| No log | 7.4468 | 350 | 0.6771 | 0.4934 | 0.6771 | 0.8229 |
| No log | 7.4894 | 352 | 0.6655 | 0.5260 | 0.6655 | 0.8158 |
| No log | 7.5319 | 354 | 0.6996 | 0.4824 | 0.6996 | 0.8364 |
| No log | 7.5745 | 356 | 0.7563 | 0.4787 | 0.7563 | 0.8696 |
| No log | 7.6170 | 358 | 0.7559 | 0.4472 | 0.7559 | 0.8695 |
| No log | 7.6596 | 360 | 0.7174 | 0.4563 | 0.7174 | 0.8470 |
| No log | 7.7021 | 362 | 0.6991 | 0.4752 | 0.6991 | 0.8361 |
| No log | 7.7447 | 364 | 0.7068 | 0.5157 | 0.7068 | 0.8407 |
| No log | 7.7872 | 366 | 0.7185 | 0.5164 | 0.7185 | 0.8476 |
| No log | 7.8298 | 368 | 0.7191 | 0.4917 | 0.7191 | 0.8480 |
| No log | 7.8723 | 370 | 0.7222 | 0.4944 | 0.7222 | 0.8498 |
| No log | 7.9149 | 372 | 0.7198 | 0.5116 | 0.7198 | 0.8484 |
| No log | 7.9574 | 374 | 0.7140 | 0.4456 | 0.7140 | 0.8450 |
| No log | 8.0 | 376 | 0.7053 | 0.4742 | 0.7053 | 0.8398 |
| No log | 8.0426 | 378 | 0.7074 | 0.4379 | 0.7074 | 0.8411 |
| No log | 8.0851 | 380 | 0.7098 | 0.4231 | 0.7098 | 0.8425 |
| No log | 8.1277 | 382 | 0.7178 | 0.4493 | 0.7178 | 0.8472 |
| No log | 8.1702 | 384 | 0.7099 | 0.4210 | 0.7099 | 0.8426 |
| No log | 8.2128 | 386 | 0.7044 | 0.4141 | 0.7044 | 0.8393 |
| No log | 8.2553 | 388 | 0.7146 | 0.4520 | 0.7146 | 0.8453 |
| No log | 8.2979 | 390 | 0.7399 | 0.4186 | 0.7399 | 0.8602 |
| No log | 8.3404 | 392 | 0.8051 | 0.4184 | 0.8051 | 0.8973 |
| No log | 8.3830 | 394 | 0.8608 | 0.4434 | 0.8608 | 0.9278 |
| No log | 8.4255 | 396 | 0.8381 | 0.4274 | 0.8381 | 0.9155 |
| No log | 8.4681 | 398 | 0.7955 | 0.4193 | 0.7955 | 0.8919 |
| No log | 8.5106 | 400 | 0.7937 | 0.4298 | 0.7937 | 0.8909 |
| No log | 8.5532 | 402 | 0.7907 | 0.4456 | 0.7907 | 0.8892 |
| No log | 8.5957 | 404 | 0.7832 | 0.4577 | 0.7832 | 0.8850 |
| No log | 8.6383 | 406 | 0.7584 | 0.4605 | 0.7584 | 0.8708 |
| No log | 8.6809 | 408 | 0.7452 | 0.4549 | 0.7452 | 0.8632 |
| No log | 8.7234 | 410 | 0.7360 | 0.4657 | 0.7360 | 0.8579 |
| No log | 8.7660 | 412 | 0.7359 | 0.4755 | 0.7359 | 0.8579 |
| No log | 8.8085 | 414 | 0.7467 | 0.4660 | 0.7467 | 0.8641 |
| No log | 8.8511 | 416 | 0.7184 | 0.4698 | 0.7184 | 0.8476 |
| No log | 8.8936 | 418 | 0.7024 | 0.5217 | 0.7024 | 0.8381 |
| No log | 8.9362 | 420 | 0.7173 | 0.5201 | 0.7173 | 0.8469 |
| No log | 8.9787 | 422 | 0.7355 | 0.5201 | 0.7355 | 0.8576 |
| No log | 9.0213 | 424 | 0.7429 | 0.4877 | 0.7429 | 0.8619 |
| No log | 9.0638 | 426 | 0.7701 | 0.4322 | 0.7701 | 0.8775 |
| No log | 9.1064 | 428 | 0.7964 | 0.4759 | 0.7964 | 0.8924 |
| No log | 9.1489 | 430 | 0.7830 | 0.4603 | 0.7830 | 0.8849 |
| No log | 9.1915 | 432 | 0.7755 | 0.4745 | 0.7755 | 0.8807 |
| No log | 9.2340 | 434 | 0.7564 | 0.4708 | 0.7564 | 0.8697 |
| No log | 9.2766 | 436 | 0.7410 | 0.4883 | 0.7410 | 0.8608 |
| No log | 9.3191 | 438 | 0.7393 | 0.4908 | 0.7393 | 0.8598 |
| No log | 9.3617 | 440 | 0.7584 | 0.4767 | 0.7584 | 0.8708 |
| No log | 9.4043 | 442 | 0.8001 | 0.4827 | 0.8001 | 0.8945 |
| No log | 9.4468 | 444 | 0.7730 | 0.4892 | 0.7730 | 0.8792 |
| No log | 9.4894 | 446 | 0.7271 | 0.4964 | 0.7271 | 0.8527 |
| No log | 9.5319 | 448 | 0.7335 | 0.4904 | 0.7335 | 0.8564 |
| No log | 9.5745 | 450 | 0.7336 | 0.4959 | 0.7336 | 0.8565 |
| No log | 9.6170 | 452 | 0.7225 | 0.4370 | 0.7225 | 0.8500 |
| No log | 9.6596 | 454 | 0.7278 | 0.4374 | 0.7278 | 0.8531 |
| No log | 9.7021 | 456 | 0.7265 | 0.4505 | 0.7265 | 0.8524 |
| No log | 9.7447 | 458 | 0.7190 | 0.4505 | 0.7190 | 0.8479 |
| No log | 9.7872 | 460 | 0.7404 | 0.4722 | 0.7404 | 0.8605 |
| No log | 9.8298 | 462 | 0.7751 | 0.4272 | 0.7751 | 0.8804 |
| No log | 9.8723 | 464 | 0.7695 | 0.4161 | 0.7695 | 0.8772 |
| No log | 9.9149 | 466 | 0.7485 | 0.4815 | 0.7485 | 0.8651 |
| No log | 9.9574 | 468 | 0.7421 | 0.4576 | 0.7421 | 0.8614 |
| No log | 10.0 | 470 | 0.7615 | 0.4449 | 0.7615 | 0.8726 |
| No log | 10.0426 | 472 | 0.7822 | 0.4395 | 0.7822 | 0.8844 |
| No log | 10.0851 | 474 | 0.7802 | 0.4339 | 0.7802 | 0.8833 |
| No log | 10.1277 | 476 | 0.7895 | 0.4451 | 0.7895 | 0.8885 |
| No log | 10.1702 | 478 | 0.7676 | 0.4560 | 0.7676 | 0.8761 |
| No log | 10.2128 | 480 | 0.7461 | 0.4671 | 0.7461 | 0.8638 |
| No log | 10.2553 | 482 | 0.7448 | 0.4493 | 0.7448 | 0.8630 |
| No log | 10.2979 | 484 | 0.7296 | 0.4861 | 0.7296 | 0.8541 |
| No log | 10.3404 | 486 | 0.7119 | 0.4855 | 0.7119 | 0.8437 |
| No log | 10.3830 | 488 | 0.7111 | 0.5305 | 0.7111 | 0.8433 |
| No log | 10.4255 | 490 | 0.7098 | 0.5137 | 0.7098 | 0.8425 |
| No log | 10.4681 | 492 | 0.7150 | 0.5235 | 0.7150 | 0.8456 |
| No log | 10.5106 | 494 | 0.7132 | 0.5246 | 0.7132 | 0.8445 |
| No log | 10.5532 | 496 | 0.7030 | 0.5328 | 0.7030 | 0.8385 |
| No log | 10.5957 | 498 | 0.7163 | 0.4601 | 0.7163 | 0.8463 |
| 0.5377 | 10.6383 | 500 | 0.7326 | 0.4476 | 0.7326 | 0.8559 |
| 0.5377 | 10.6809 | 502 | 0.7244 | 0.4731 | 0.7244 | 0.8511 |
| 0.5377 | 10.7234 | 504 | 0.7159 | 0.5217 | 0.7159 | 0.8461 |
| 0.5377 | 10.7660 | 506 | 0.7419 | 0.4906 | 0.7419 | 0.8613 |
| 0.5377 | 10.8085 | 508 | 0.7596 | 0.4819 | 0.7596 | 0.8716 |
| 0.5377 | 10.8511 | 510 | 0.7617 | 0.4964 | 0.7617 | 0.8728 |
| 0.5377 | 10.8936 | 512 | 0.7486 | 0.5120 | 0.7486 | 0.8652 |
| 0.5377 | 10.9362 | 514 | 0.7470 | 0.5123 | 0.7470 | 0.8643 |
| 0.5377 | 10.9787 | 516 | 0.7542 | 0.5211 | 0.7542 | 0.8685 |
| 0.5377 | 11.0213 | 518 | 0.7601 | 0.5161 | 0.7601 | 0.8718 |
| 0.5377 | 11.0638 | 520 | 0.7454 | 0.5145 | 0.7454 | 0.8634 |
| 0.5377 | 11.1064 | 522 | 0.7325 | 0.5287 | 0.7325 | 0.8559 |
| 0.5377 | 11.1489 | 524 | 0.7213 | 0.5177 | 0.7213 | 0.8493 |
| 0.5377 | 11.1915 | 526 | 0.6962 | 0.5706 | 0.6962 | 0.8344 |
| 0.5377 | 11.2340 | 528 | 0.6844 | 0.5451 | 0.6844 | 0.8273 |
| 0.5377 | 11.2766 | 530 | 0.6872 | 0.5523 | 0.6872 | 0.8289 |
| 0.5377 | 11.3191 | 532 | 0.6907 | 0.5192 | 0.6907 | 0.8311 |
| 0.5377 | 11.3617 | 534 | 0.6910 | 0.4826 | 0.6910 | 0.8313 |
| 0.5377 | 11.4043 | 536 | 0.7036 | 0.5103 | 0.7036 | 0.8388 |
| 0.5377 | 11.4468 | 538 | 0.7209 | 0.4811 | 0.7209 | 0.8491 |
| 0.5377 | 11.4894 | 540 | 0.6960 | 0.4879 | 0.6960 | 0.8343 |
| 0.5377 | 11.5319 | 542 | 0.6601 | 0.5335 | 0.6601 | 0.8125 |
| 0.5377 | 11.5745 | 544 | 0.6613 | 0.5325 | 0.6613 | 0.8132 |
| 0.5377 | 11.6170 | 546 | 0.6684 | 0.5220 | 0.6684 | 0.8175 |
| 0.5377 | 11.6596 | 548 | 0.6632 | 0.5194 | 0.6632 | 0.8144 |
| 0.5377 | 11.7021 | 550 | 0.6542 | 0.5474 | 0.6542 | 0.8088 |
| 0.5377 | 11.7447 | 552 | 0.6527 | 0.5636 | 0.6527 | 0.8079 |
| 0.5377 | 11.7872 | 554 | 0.6506 | 0.5143 | 0.6506 | 0.8066 |
| 0.5377 | 11.8298 | 556 | 0.6388 | 0.5460 | 0.6388 | 0.7992 |
| 0.5377 | 11.8723 | 558 | 0.6466 | 0.5474 | 0.6466 | 0.8041 |
| 0.5377 | 11.9149 | 560 | 0.6682 | 0.5490 | 0.6682 | 0.8174 |
| 0.5377 | 11.9574 | 562 | 0.6882 | 0.5418 | 0.6882 | 0.8296 |
| 0.5377 | 12.0 | 564 | 0.7075 | 0.4880 | 0.7075 | 0.8411 |
| 0.5377 | 12.0426 | 566 | 0.7194 | 0.4755 | 0.7194 | 0.8482 |
| 0.5377 | 12.0851 | 568 | 0.7365 | 0.4838 | 0.7365 | 0.8582 |
| 0.5377 | 12.1277 | 570 | 0.7636 | 0.5195 | 0.7636 | 0.8738 |
| 0.5377 | 12.1702 | 572 | 0.7702 | 0.4773 | 0.7702 | 0.8776 |
| 0.5377 | 12.2128 | 574 | 0.7813 | 0.5044 | 0.7813 | 0.8839 |
| 0.5377 | 12.2553 | 576 | 0.7729 | 0.4631 | 0.7729 | 0.8791 |
| 0.5377 | 12.2979 | 578 | 0.7514 | 0.4811 | 0.7514 | 0.8668 |
| 0.5377 | 12.3404 | 580 | 0.7472 | 0.4939 | 0.7472 | 0.8644 |
| 0.5377 | 12.3830 | 582 | 0.7430 | 0.5055 | 0.7430 | 0.8620 |
| 0.5377 | 12.4255 | 584 | 0.7368 | 0.5271 | 0.7368 | 0.8584 |
| 0.5377 | 12.4681 | 586 | 0.7208 | 0.5136 | 0.7208 | 0.8490 |
| 0.5377 | 12.5106 | 588 | 0.7152 | 0.5222 | 0.7152 | 0.8457 |
| 0.5377 | 12.5532 | 590 | 0.7279 | 0.4848 | 0.7279 | 0.8531 |
| 0.5377 | 12.5957 | 592 | 0.7278 | 0.4776 | 0.7278 | 0.8531 |
| 0.5377 | 12.6383 | 594 | 0.7071 | 0.5379 | 0.7071 | 0.8409 |
| 0.5377 | 12.6809 | 596 | 0.7064 | 0.5120 | 0.7064 | 0.8405 |
| 0.5377 | 12.7234 | 598 | 0.7141 | 0.5156 | 0.7141 | 0.8451 |
| 0.5377 | 12.7660 | 600 | 0.7191 | 0.5172 | 0.7191 | 0.8480 |
| 0.5377 | 12.8085 | 602 | 0.7265 | 0.5328 | 0.7265 | 0.8523 |
| 0.5377 | 12.8511 | 604 | 0.7397 | 0.5136 | 0.7397 | 0.8601 |
| 0.5377 | 12.8936 | 606 | 0.7483 | 0.5101 | 0.7483 | 0.8650 |
| 0.5377 | 12.9362 | 608 | 0.7651 | 0.5453 | 0.7651 | 0.8747 |
| 0.5377 | 12.9787 | 610 | 0.7912 | 0.4958 | 0.7912 | 0.8895 |
| 0.5377 | 13.0213 | 612 | 0.7797 | 0.5238 | 0.7797 | 0.8830 |
| 0.5377 | 13.0638 | 614 | 0.7572 | 0.5232 | 0.7572 | 0.8702 |
| 0.5377 | 13.1064 | 616 | 0.7291 | 0.5351 | 0.7291 | 0.8539 |
| 0.5377 | 13.1489 | 618 | 0.7079 | 0.5053 | 0.7079 | 0.8414 |
| 0.5377 | 13.1915 | 620 | 0.6936 | 0.5053 | 0.6936 | 0.8329 |
| 0.5377 | 13.2340 | 622 | 0.6868 | 0.5595 | 0.6868 | 0.8287 |
| 0.5377 | 13.2766 | 624 | 0.7004 | 0.5683 | 0.7004 | 0.8369 |
| 0.5377 | 13.3191 | 626 | 0.7084 | 0.5381 | 0.7084 | 0.8416 |
| 0.5377 | 13.3617 | 628 | 0.7027 | 0.5682 | 0.7027 | 0.8383 |
| 0.5377 | 13.4043 | 630 | 0.7018 | 0.5211 | 0.7018 | 0.8377 |
| 0.5377 | 13.4468 | 632 | 0.7174 | 0.5189 | 0.7174 | 0.8470 |
| 0.5377 | 13.4894 | 634 | 0.7354 | 0.5189 | 0.7354 | 0.8576 |
| 0.5377 | 13.5319 | 636 | 0.7512 | 0.5607 | 0.7512 | 0.8667 |
| 0.5377 | 13.5745 | 638 | 0.7594 | 0.5607 | 0.7594 | 0.8714 |
| 0.5377 | 13.6170 | 640 | 0.7590 | 0.5189 | 0.7590 | 0.8712 |
| 0.5377 | 13.6596 | 642 | 0.7762 | 0.4703 | 0.7762 | 0.8810 |
| 0.5377 | 13.7021 | 644 | 0.7760 | 0.4590 | 0.7760 | 0.8809 |
| 0.5377 | 13.7447 | 646 | 0.7604 | 0.4954 | 0.7604 | 0.8720 |
| 0.5377 | 13.7872 | 648 | 0.7604 | 0.5364 | 0.7604 | 0.8720 |
| 0.5377 | 13.8298 | 650 | 0.7850 | 0.4787 | 0.7850 | 0.8860 |
| 0.5377 | 13.8723 | 652 | 0.7872 | 0.4753 | 0.7872 | 0.8872 |
| 0.5377 | 13.9149 | 654 | 0.7635 | 0.4955 | 0.7635 | 0.8738 |
| 0.5377 | 13.9574 | 656 | 0.7257 | 0.5006 | 0.7257 | 0.8519 |
| 0.5377 | 14.0 | 658 | 0.7098 | 0.5043 | 0.7098 | 0.8425 |
| 0.5377 | 14.0426 | 660 | 0.7189 | 0.5254 | 0.7189 | 0.8479 |
| 0.5377 | 14.0851 | 662 | 0.7258 | 0.4890 | 0.7258 | 0.8519 |
| 0.5377 | 14.1277 | 664 | 0.7451 | 0.5179 | 0.7451 | 0.8632 |
| 0.5377 | 14.1702 | 666 | 0.7831 | 0.5022 | 0.7831 | 0.8849 |
| 0.5377 | 14.2128 | 668 | 0.8178 | 0.4877 | 0.8178 | 0.9043 |
| 0.5377 | 14.2553 | 670 | 0.8218 | 0.4909 | 0.8218 | 0.9066 |
| 0.5377 | 14.2979 | 672 | 0.8145 | 0.5045 | 0.8145 | 0.9025 |
| 0.5377 | 14.3404 | 674 | 0.8015 | 0.4767 | 0.8015 | 0.8952 |
| 0.5377 | 14.3830 | 676 | 0.7934 | 0.4859 | 0.7934 | 0.8907 |
| 0.5377 | 14.4255 | 678 | 0.7744 | 0.5102 | 0.7744 | 0.8800 |
| 0.5377 | 14.4681 | 680 | 0.7630 | 0.5222 | 0.7630 | 0.8735 |
| 0.5377 | 14.5106 | 682 | 0.7746 | 0.4992 | 0.7746 | 0.8801 |
| 0.5377 | 14.5532 | 684 | 0.7891 | 0.4778 | 0.7891 | 0.8883 |
| 0.5377 | 14.5957 | 686 | 0.7788 | 0.5054 | 0.7788 | 0.8825 |
| 0.5377 | 14.6383 | 688 | 0.7626 | 0.5059 | 0.7626 | 0.8733 |
| 0.5377 | 14.6809 | 690 | 0.7699 | 0.4879 | 0.7699 | 0.8774 |
| 0.5377 | 14.7234 | 692 | 0.7669 | 0.4793 | 0.7669 | 0.8757 |
| 0.5377 | 14.7660 | 694 | 0.7442 | 0.5149 | 0.7442 | 0.8626 |
| 0.5377 | 14.8085 | 696 | 0.7450 | 0.5319 | 0.7450 | 0.8631 |
| 0.5377 | 14.8511 | 698 | 0.7542 | 0.4870 | 0.7542 | 0.8684 |
| 0.5377 | 14.8936 | 700 | 0.7443 | 0.5157 | 0.7443 | 0.8627 |
| 0.5377 | 14.9362 | 702 | 0.7371 | 0.5286 | 0.7371 | 0.8585 |
| 0.5377 | 14.9787 | 704 | 0.7329 | 0.5122 | 0.7329 | 0.8561 |
| 0.5377 | 15.0213 | 706 | 0.7247 | 0.5286 | 0.7247 | 0.8513 |
| 0.5377 | 15.0638 | 708 | 0.7277 | 0.5052 | 0.7277 | 0.8531 |
| 0.5377 | 15.1064 | 710 | 0.7664 | 0.4578 | 0.7664 | 0.8755 |
| 0.5377 | 15.1489 | 712 | 0.7811 | 0.4593 | 0.7811 | 0.8838 |
| 0.5377 | 15.1915 | 714 | 0.7523 | 0.4727 | 0.7523 | 0.8673 |
| 0.5377 | 15.2340 | 716 | 0.7171 | 0.5299 | 0.7171 | 0.8468 |
| 0.5377 | 15.2766 | 718 | 0.7191 | 0.4873 | 0.7191 | 0.8480 |
| 0.5377 | 15.3191 | 720 | 0.7296 | 0.5128 | 0.7296 | 0.8542 |
| 0.5377 | 15.3617 | 722 | 0.7139 | 0.4873 | 0.7139 | 0.8449 |
| 0.5377 | 15.4043 | 724 | 0.7085 | 0.5129 | 0.7085 | 0.8417 |
| 0.5377 | 15.4468 | 726 | 0.7193 | 0.5287 | 0.7193 | 0.8481 |
| 0.5377 | 15.4894 | 728 | 0.7203 | 0.5361 | 0.7203 | 0.8487 |
| 0.5377 | 15.5319 | 730 | 0.7297 | 0.5028 | 0.7297 | 0.8542 |
| 0.5377 | 15.5745 | 732 | 0.7421 | 0.4908 | 0.7421 | 0.8615 |
| 0.5377 | 15.6170 | 734 | 0.7574 | 0.4908 | 0.7574 | 0.8703 |
| 0.5377 | 15.6596 | 736 | 0.7805 | 0.5146 | 0.7805 | 0.8834 |
| 0.5377 | 15.7021 | 738 | 0.7955 | 0.4907 | 0.7955 | 0.8919 |
| 0.5377 | 15.7447 | 740 | 0.8111 | 0.4952 | 0.8111 | 0.9006 |
| 0.5377 | 15.7872 | 742 | 0.8018 | 0.5083 | 0.8018 | 0.8954 |
| 0.5377 | 15.8298 | 744 | 0.7881 | 0.5008 | 0.7881 | 0.8877 |
| 0.5377 | 15.8723 | 746 | 0.7771 | 0.5003 | 0.7771 | 0.8815 |
| 0.5377 | 15.9149 | 748 | 0.7680 | 0.5153 | 0.7680 | 0.8764 |
| 0.5377 | 15.9574 | 750 | 0.7583 | 0.5062 | 0.7583 | 0.8708 |
| 0.5377 | 16.0 | 752 | 0.7646 | 0.5161 | 0.7646 | 0.8744 |
| 0.5377 | 16.0426 | 754 | 0.7907 | 0.4673 | 0.7907 | 0.8892 |
| 0.5377 | 16.0851 | 756 | 0.7823 | 0.4950 | 0.7823 | 0.8845 |
| 0.5377 | 16.1277 | 758 | 0.7572 | 0.4984 | 0.7572 | 0.8702 |
| 0.5377 | 16.1702 | 760 | 0.7575 | 0.5122 | 0.7575 | 0.8703 |
| 0.5377 | 16.2128 | 762 | 0.7635 | 0.4893 | 0.7635 | 0.8738 |
| 0.5377 | 16.2553 | 764 | 0.7602 | 0.5153 | 0.7602 | 0.8719 |
| 0.5377 | 16.2979 | 766 | 0.7765 | 0.5083 | 0.7765 | 0.8812 |
| 0.5377 | 16.3404 | 768 | 0.7813 | 0.5083 | 0.7813 | 0.8839 |
| 0.5377 | 16.3830 | 770 | 0.7852 | 0.5137 | 0.7852 | 0.8861 |
| 0.5377 | 16.4255 | 772 | 0.7755 | 0.5146 | 0.7755 | 0.8806 |
| 0.5377 | 16.4681 | 774 | 0.7788 | 0.5050 | 0.7788 | 0.8825 |
| 0.5377 | 16.5106 | 776 | 0.7906 | 0.4838 | 0.7906 | 0.8891 |
| 0.5377 | 16.5532 | 778 | 0.8072 | 0.4737 | 0.8072 | 0.8984 |
| 0.5377 | 16.5957 | 780 | 0.8052 | 0.4633 | 0.8052 | 0.8973 |
| 0.5377 | 16.6383 | 782 | 0.7769 | 0.4844 | 0.7769 | 0.8814 |
| 0.5377 | 16.6809 | 784 | 0.7638 | 0.4997 | 0.7638 | 0.8740 |
| 0.5377 | 16.7234 | 786 | 0.7688 | 0.5005 | 0.7688 | 0.8768 |
| 0.5377 | 16.7660 | 788 | 0.7523 | 0.5211 | 0.7523 | 0.8673 |
| 0.5377 | 16.8085 | 790 | 0.7543 | 0.5211 | 0.7543 | 0.8685 |
| 0.5377 | 16.8511 | 792 | 0.7689 | 0.5140 | 0.7689 | 0.8769 |
| 0.5377 | 16.8936 | 794 | 0.8082 | 0.4550 | 0.8082 | 0.8990 |
| 0.5377 | 16.9362 | 796 | 0.8242 | 0.4659 | 0.8242 | 0.9079 |
| 0.5377 | 16.9787 | 798 | 0.7953 | 0.4958 | 0.7953 | 0.8918 |
| 0.5377 | 17.0213 | 800 | 0.7696 | 0.5237 | 0.7696 | 0.8773 |
| 0.5377 | 17.0638 | 802 | 0.7634 | 0.5408 | 0.7634 | 0.8737 |
| 0.5377 | 17.1064 | 804 | 0.7789 | 0.5408 | 0.7789 | 0.8826 |
| 0.5377 | 17.1489 | 806 | 0.7904 | 0.5273 | 0.7904 | 0.8891 |
| 0.5377 | 17.1915 | 808 | 0.7992 | 0.5260 | 0.7992 | 0.8940 |
| 0.5377 | 17.2340 | 810 | 0.7970 | 0.5216 | 0.7970 | 0.8927 |
| 0.5377 | 17.2766 | 812 | 0.8064 | 0.5326 | 0.8064 | 0.8980 |
| 0.5377 | 17.3191 | 814 | 0.8204 | 0.5301 | 0.8204 | 0.9058 |
| 0.5377 | 17.3617 | 816 | 0.8295 | 0.5142 | 0.8295 | 0.9108 |
| 0.5377 | 17.4043 | 818 | 0.8104 | 0.5228 | 0.8104 | 0.9002 |
| 0.5377 | 17.4468 | 820 | 0.7757 | 0.5506 | 0.7757 | 0.8807 |
| 0.5377 | 17.4894 | 822 | 0.7670 | 0.5618 | 0.7670 | 0.8758 |
| 0.5377 | 17.5319 | 824 | 0.7791 | 0.5395 | 0.7791 | 0.8827 |
| 0.5377 | 17.5745 | 826 | 0.7936 | 0.5050 | 0.7936 | 0.8909 |
| 0.5377 | 17.6170 | 828 | 0.7905 | 0.5050 | 0.7905 | 0.8891 |
| 0.5377 | 17.6596 | 830 | 0.7984 | 0.5127 | 0.7984 | 0.8935 |
| 0.5377 | 17.7021 | 832 | 0.8134 | 0.4936 | 0.8134 | 0.9019 |
| 0.5377 | 17.7447 | 834 | 0.8113 | 0.5137 | 0.8113 | 0.9007 |
| 0.5377 | 17.7872 | 836 | 0.7968 | 0.4975 | 0.7968 | 0.8926 |
| 0.5377 | 17.8298 | 838 | 0.7870 | 0.5081 | 0.7870 | 0.8871 |
| 0.5377 | 17.8723 | 840 | 0.7784 | 0.5090 | 0.7784 | 0.8822 |
| 0.5377 | 17.9149 | 842 | 0.7855 | 0.5264 | 0.7855 | 0.8863 |
| 0.5377 | 17.9574 | 844 | 0.8091 | 0.4914 | 0.8091 | 0.8995 |
| 0.5377 | 18.0 | 846 | 0.8290 | 0.4683 | 0.8290 | 0.9105 |
| 0.5377 | 18.0426 | 848 | 0.8389 | 0.5101 | 0.8389 | 0.9159 |
| 0.5377 | 18.0851 | 850 | 0.8354 | 0.4909 | 0.8354 | 0.9140 |
| 0.5377 | 18.1277 | 852 | 0.8279 | 0.4924 | 0.8279 | 0.9099 |
| 0.5377 | 18.1702 | 854 | 0.8196 | 0.4938 | 0.8196 | 0.9053 |
| 0.5377 | 18.2128 | 856 | 0.8165 | 0.4867 | 0.8165 | 0.9036 |
| 0.5377 | 18.2553 | 858 | 0.8024 | 0.4921 | 0.8024 | 0.8957 |
| 0.5377 | 18.2979 | 860 | 0.7713 | 0.5143 | 0.7713 | 0.8782 |
| 0.5377 | 18.3404 | 862 | 0.7464 | 0.5020 | 0.7464 | 0.8640 |
| 0.5377 | 18.3830 | 864 | 0.7408 | 0.5241 | 0.7408 | 0.8607 |
| 0.5377 | 18.4255 | 866 | 0.7478 | 0.5436 | 0.7478 | 0.8647 |
| 0.5377 | 18.4681 | 868 | 0.7464 | 0.5059 | 0.7464 | 0.8639 |
| 0.5377 | 18.5106 | 870 | 0.7534 | 0.5219 | 0.7534 | 0.8680 |
| 0.5377 | 18.5532 | 872 | 0.7671 | 0.5491 | 0.7671 | 0.8758 |
| 0.5377 | 18.5957 | 874 | 0.7612 | 0.5251 | 0.7612 | 0.8725 |
| 0.5377 | 18.6383 | 876 | 0.7650 | 0.4915 | 0.7650 | 0.8746 |
| 0.5377 | 18.6809 | 878 | 0.7679 | 0.4942 | 0.7679 | 0.8763 |
| 0.5377 | 18.7234 | 880 | 0.7663 | 0.5067 | 0.7663 | 0.8754 |
| 0.5377 | 18.7660 | 882 | 0.7980 | 0.4825 | 0.7980 | 0.8933 |
| 0.5377 | 18.8085 | 884 | 0.8253 | 0.4642 | 0.8253 | 0.9085 |
| 0.5377 | 18.8511 | 886 | 0.8137 | 0.4730 | 0.8137 | 0.9020 |
| 0.5377 | 18.8936 | 888 | 0.7835 | 0.4618 | 0.7835 | 0.8852 |
| 0.5377 | 18.9362 | 890 | 0.7677 | 0.5240 | 0.7677 | 0.8762 |
| 0.5377 | 18.9787 | 892 | 0.7647 | 0.5195 | 0.7647 | 0.8745 |
| 0.5377 | 19.0213 | 894 | 0.7676 | 0.4939 | 0.7676 | 0.8761 |
| 0.5377 | 19.0638 | 896 | 0.7866 | 0.4929 | 0.7866 | 0.8869 |
| 0.5377 | 19.1064 | 898 | 0.8114 | 0.4810 | 0.8114 | 0.9008 |
| 0.5377 | 19.1489 | 900 | 0.8100 | 0.4810 | 0.8100 | 0.9000 |
| 0.5377 | 19.1915 | 902 | 0.7928 | 0.4816 | 0.7928 | 0.8904 |
| 0.5377 | 19.2340 | 904 | 0.7549 | 0.5282 | 0.7549 | 0.8689 |
| 0.5377 | 19.2766 | 906 | 0.7458 | 0.5451 | 0.7458 | 0.8636 |
| 0.5377 | 19.3191 | 908 | 0.7556 | 0.5410 | 0.7556 | 0.8693 |
| 0.5377 | 19.3617 | 910 | 0.7725 | 0.4984 | 0.7725 | 0.8789 |
| 0.5377 | 19.4043 | 912 | 0.8041 | 0.4832 | 0.8041 | 0.8967 |
| 0.5377 | 19.4468 | 914 | 0.8474 | 0.4605 | 0.8474 | 0.9205 |
| 0.5377 | 19.4894 | 916 | 0.8342 | 0.4717 | 0.8342 | 0.9134 |
| 0.5377 | 19.5319 | 918 | 0.7772 | 0.5257 | 0.7772 | 0.8816 |
| 0.5377 | 19.5745 | 920 | 0.7400 | 0.5114 | 0.7400 | 0.8602 |
| 0.5377 | 19.6170 | 922 | 0.7288 | 0.5124 | 0.7288 | 0.8537 |
| 0.5377 | 19.6596 | 924 | 0.7259 | 0.5124 | 0.7259 | 0.8520 |
| 0.5377 | 19.7021 | 926 | 0.7348 | 0.5114 | 0.7348 | 0.8572 |
| 0.5377 | 19.7447 | 928 | 0.7491 | 0.5169 | 0.7491 | 0.8655 |
| 0.5377 | 19.7872 | 930 | 0.7308 | 0.5277 | 0.7308 | 0.8549 |
| 0.5377 | 19.8298 | 932 | 0.7274 | 0.5364 | 0.7274 | 0.8529 |
| 0.5377 | 19.8723 | 934 | 0.7311 | 0.4905 | 0.7311 | 0.8551 |
| 0.5377 | 19.9149 | 936 | 0.7486 | 0.4602 | 0.7486 | 0.8652 |
| 0.5377 | 19.9574 | 938 | 0.7583 | 0.4970 | 0.7583 | 0.8708 |
| 0.5377 | 20.0 | 940 | 0.7805 | 0.5395 | 0.7805 | 0.8835 |
| 0.5377 | 20.0426 | 942 | 0.7992 | 0.4571 | 0.7992 | 0.8940 |
| 0.5377 | 20.0851 | 944 | 0.7864 | 0.4828 | 0.7864 | 0.8868 |
| 0.5377 | 20.1277 | 946 | 0.7563 | 0.5337 | 0.7563 | 0.8697 |
| 0.5377 | 20.1702 | 948 | 0.7275 | 0.5349 | 0.7275 | 0.8530 |
| 0.5377 | 20.2128 | 950 | 0.7249 | 0.5448 | 0.7249 | 0.8514 |
| 0.5377 | 20.2553 | 952 | 0.7308 | 0.5436 | 0.7308 | 0.8549 |
| 0.5377 | 20.2979 | 954 | 0.7435 | 0.5364 | 0.7435 | 0.8622 |
| 0.5377 | 20.3404 | 956 | 0.7460 | 0.5364 | 0.7460 | 0.8637 |
| 0.5377 | 20.3830 | 958 | 0.7413 | 0.5355 | 0.7413 | 0.8610 |
| 0.5377 | 20.4255 | 960 | 0.7433 | 0.4901 | 0.7433 | 0.8621 |
| 0.5377 | 20.4681 | 962 | 0.7379 | 0.5244 | 0.7379 | 0.8590 |
| 0.5377 | 20.5106 | 964 | 0.7387 | 0.5491 | 0.7387 | 0.8595 |
| 0.5377 | 20.5532 | 966 | 0.7575 | 0.5465 | 0.7575 | 0.8704 |
| 0.5377 | 20.5957 | 968 | 0.8032 | 0.5070 | 0.8032 | 0.8962 |
| 0.5377 | 20.6383 | 970 | 0.8194 | 0.5070 | 0.8194 | 0.9052 |
| 0.5377 | 20.6809 | 972 | 0.8039 | 0.5356 | 0.8039 | 0.8966 |
| 0.5377 | 20.7234 | 974 | 0.7841 | 0.5465 | 0.7841 | 0.8855 |
| 0.5377 | 20.7660 | 976 | 0.7681 | 0.5562 | 0.7681 | 0.8764 |
| 0.5377 | 20.8085 | 978 | 0.7643 | 0.5392 | 0.7643 | 0.8743 |
| 0.5377 | 20.8511 | 980 | 0.7563 | 0.5392 | 0.7563 | 0.8697 |
| 0.5377 | 20.8936 | 982 | 0.7434 | 0.5476 | 0.7434 | 0.8622 |
| 0.5377 | 20.9362 | 984 | 0.7383 | 0.5491 | 0.7383 | 0.8592 |
| 0.5377 | 20.9787 | 986 | 0.7391 | 0.5448 | 0.7391 | 0.8597 |
| 0.5377 | 21.0213 | 988 | 0.7437 | 0.5491 | 0.7437 | 0.8624 |
| 0.5377 | 21.0638 | 990 | 0.7770 | 0.5185 | 0.7770 | 0.8815 |
| 0.5377 | 21.1064 | 992 | 0.8325 | 0.4794 | 0.8325 | 0.9124 |
| 0.5377 | 21.1489 | 994 | 0.8420 | 0.4608 | 0.8420 | 0.9176 |
| 0.5377 | 21.1915 | 996 | 0.8111 | 0.4834 | 0.8111 | 0.9006 |
| 0.5377 | 21.2340 | 998 | 0.7920 | 0.5405 | 0.7920 | 0.8900 |
| 0.1255 | 21.2766 | 1000 | 0.7846 | 0.5248 | 0.7846 | 0.8858 |
| 0.1255 | 21.3191 | 1002 | 0.7696 | 0.5041 | 0.7696 | 0.8773 |
| 0.1255 | 21.3617 | 1004 | 0.7710 | 0.4859 | 0.7710 | 0.8781 |
| 0.1255 | 21.4043 | 1006 | 0.7660 | 0.5045 | 0.7660 | 0.8752 |
| 0.1255 | 21.4468 | 1008 | 0.7661 | 0.5280 | 0.7661 | 0.8753 |
| 0.1255 | 21.4894 | 1010 | 0.7822 | 0.5392 | 0.7822 | 0.8844 |
| 0.1255 | 21.5319 | 1012 | 0.7924 | 0.4966 | 0.7924 | 0.8902 |
| 0.1255 | 21.5745 | 1014 | 0.7748 | 0.5248 | 0.7748 | 0.8802 |
| 0.1255 | 21.6170 | 1016 | 0.7776 | 0.4738 | 0.7776 | 0.8818 |
| 0.1255 | 21.6596 | 1018 | 0.7847 | 0.4842 | 0.7847 | 0.8858 |
| 0.1255 | 21.7021 | 1020 | 0.7953 | 0.5064 | 0.7953 | 0.8918 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
tensorblock/merlinite-7b-lab-GGUF | tensorblock | "2024-11-18T16:38:02Z" | 30 | 0 | null | [
"gguf",
"merlinite",
"mistral",
"ibm",
"lab",
"labrador",
"labradorite",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:instructlab/merlinite-7b-lab",
"base_model:quantized:instructlab/merlinite-7b-lab",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-18T16:07:12Z" | ---
pipeline_tag: text-generation
tags:
- merlinite
- mistral
- ibm
- lab
- labrador
- labradorite
- TensorBlock
- GGUF
license: apache-2.0
language:
- en
base_model: instructlab/merlinite-7b-lab
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## instructlab/merlinite-7b-lab - GGUF
This repo contains GGUF format model files for [instructlab/merlinite-7b-lab](https://huggingface.co/instructlab/merlinite-7b-lab).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|system|>
{system_prompt}
<|user|>
{prompt}
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [merlinite-7b-lab-Q2_K.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [merlinite-7b-lab-Q3_K_S.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q3_K_S.gguf) | Q3_K_S | 2.947 GB | very small, high quality loss |
| [merlinite-7b-lab-Q3_K_M.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q3_K_M.gguf) | Q3_K_M | 3.277 GB | very small, high quality loss |
| [merlinite-7b-lab-Q3_K_L.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q3_K_L.gguf) | Q3_K_L | 3.560 GB | small, substantial quality loss |
| [merlinite-7b-lab-Q4_0.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q4_0.gguf) | Q4_0 | 3.827 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [merlinite-7b-lab-Q4_K_S.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q4_K_S.gguf) | Q4_K_S | 3.856 GB | small, greater quality loss |
| [merlinite-7b-lab-Q4_K_M.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q4_K_M.gguf) | Q4_K_M | 4.068 GB | medium, balanced quality - recommended |
| [merlinite-7b-lab-Q5_0.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q5_0.gguf) | Q5_0 | 4.655 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [merlinite-7b-lab-Q5_K_S.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q5_K_S.gguf) | Q5_K_S | 4.655 GB | large, low quality loss - recommended |
| [merlinite-7b-lab-Q5_K_M.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q5_K_M.gguf) | Q5_K_M | 4.779 GB | large, very low quality loss - recommended |
| [merlinite-7b-lab-Q6_K.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q6_K.gguf) | Q6_K | 5.534 GB | very large, extremely low quality loss |
| [merlinite-7b-lab-Q8_0.gguf](https://huggingface.co/tensorblock/merlinite-7b-lab-GGUF/blob/main/merlinite-7b-lab-Q8_0.gguf) | Q8_0 | 7.167 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/merlinite-7b-lab-GGUF --include "merlinite-7b-lab-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/merlinite-7b-lab-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
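Once downloaded, the file can be served by any llama.cpp-compatible runtime; a minimal sketch using the llama.cpp CLI (binary name and flags vary between versions, so treat this as illustrative):
```shell
./llama-cli -m MY_LOCAL_DIR/merlinite-7b-lab-Q4_K_M.gguf -e \
  -p "<|system|>\nYou are a helpful assistant.\n<|user|>\nHello!\n" -n 256
```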
|
lesso/a47a10ae-be71-4954-bf5b-ac66f498ee8c | lesso | "2025-02-05T15:00:46Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"license:other",
"region:us"
] | null | "2025-02-05T14:35:03Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-14B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a47a10ae-be71-4954-bf5b-ac66f498ee8c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-14B-Chat
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 61d1cb68685ed607_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/61d1cb68685ed607_train_data.json
type:
field_instruction: topic
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/a47a10ae-be71-4954-bf5b-ac66f498ee8c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001009
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/G.O.D/61d1cb68685ed607_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ec3b5dbf-c494-49fc-a204-346ac39c4703
wandb_project: new-09
wandb_run: your_name
wandb_runid: ec3b5dbf-c494-49fc-a204-346ac39c4703
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a47a10ae-be71-4954-bf5b-ac66f498ee8c
This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001009
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0568 | 0.0008 | 1 | 3.3034 |
| 4.4345 | 0.0416 | 50 | 2.4453 |
| 5.0985 | 0.0831 | 100 | 2.3602 |
| 4.1653 | 0.1247 | 150 | 2.2976 |
| 4.02 | 0.1663 | 200 | 2.2976 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Sakalti/Qwen2.5-test-1 | Sakalti | "2025-01-13T00:46:14Z" | 36 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"base_model:Sakalti/Saba1-1.8B",
"base_model:finetune:Sakalti/Saba1-1.8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-12T23:50:18Z" | ---
base_model:
- Sakalti/Saba1-1.8B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
inference: true
language:
- en
widget:
- messages:
- role: user
content: こんにちは!
- messages:
- role: user
content: ドラゴンフルーツは何科ですか?
- messages:
- role: user
content: hello!
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method: the configuration below stacks layers 0-27 of Saba1-1.8B and then appends a second copy of layers 20-27, yielding a deeper, 36-layer model.
### Models Merged
The following models were included in the merge:
* [Sakalti/Saba1-1.8B](https://huggingface.co/Sakalti/Saba1-1.8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 28]
model: Sakalti/Saba1-1.8B
- sources:
- layer_range: [20, 28]
model: Sakalti/Saba1-1.8B
merge_method: passthrough
dtype: float16
```
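To reproduce the merge, the config can be fed to mergekit's CLI; a sketch (assuming mergekit is installed and the config is saved as `config.yaml`):
```shell
pip install mergekit
mergekit-yaml config.yaml ./merged-model
```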
|
danielkosyra/cosine_with_restarts_2000_9e-4_16b_w0.08 | danielkosyra | "2024-07-03T21:01:54Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-03T21:01:27Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: cosine_with_restarts_2000_9e-4_16b_w0.08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cosine_with_restarts_2000_9e-4_16b_w0.08
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7986
## Model description
More information needed
## Intended uses & limitations
More information needed
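Since this is a GPT-2 style causal language model, it can be exercised with the standard text-generation pipeline; a minimal sketch (untested):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="danielkosyra/cosine_with_restarts_2000_9e-4_16b_w0.08")
print(generator("The meaning of life is", max_new_tokens=40)[0]["generated_text"])
```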
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0009
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 250
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.293 | 0.7930 | 250 | 4.7425 |
| 4.0871 | 1.5860 | 500 | 3.5106 |
| 3.275 | 2.3791 | 750 | 3.1563 |
| 2.967 | 3.1721 | 1000 | 2.9887 |
| 2.7476 | 3.9651 | 1250 | 2.8838 |
| 2.5287 | 4.7581 | 1500 | 2.8292 |
| 2.3976 | 5.5511 | 1750 | 2.8038 |
| 2.3199 | 6.3442 | 2000 | 2.7986 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
ridhism/Hridhay | ridhism | "2024-03-11T04:57:58Z" | 0 | 0 | null | [
"en",
"dataset:HuggingFaceTB/cosmopedia",
"license:apache-2.0",
"region:us"
] | null | "2024-03-11T04:55:10Z" | ---
license: apache-2.0
datasets:
- HuggingFaceTB/cosmopedia
language:
- en
metrics:
- accuracy
--- |
soyanagomez/raquel | soyanagomez | "2025-01-15T11:44:43Z" | 24 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-15T10:39:01Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Raquel
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('soyanagomez/raquel', weight_name='lora.safetensors')
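# include the trigger word "TOK" in your prompt so the LoRA activates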
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
satvikahuja/mixer_on_off_10e | satvikahuja | "2025-01-14T13:58:49Z" | 6 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | "2025-01-14T13:57:14Z" | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF | mradermacher | "2024-07-28T06:55:02Z" | 181 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Orenguteng/Llama-3.1-8B-Lexi-Uncensored",
"base_model:quantized:Orenguteng/Llama-3.1-8B-Lexi-Uncensored",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-27T14:22:41Z" | ---
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
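As one option, a minimal sketch with the llama-cpp-python bindings (the file name matches the Q4_K_M quant listed below; the prompt and token count are illustrative):

```python
from llama_cpp import Llama

# Point model_path at a GGUF file downloaded from this repo.
llm = Llama(model_path="Llama-3.1-8B-Lexi-Uncensored.Q4_K_M.gguf")
out = llm("Hello, my name is", max_tokens=64)
print(out["choices"][0]["text"])
```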
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Lexi-Uncensored-GGUF/resolve/main/Llama-3.1-8B-Lexi-Uncensored.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CausalLM/14B | CausalLM | "2023-12-10T01:21:49Z" | 2,852 | 300 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"qwen",
"causallm",
"en",
"zh",
"dataset:JosephusCheung/GuanacoDataset",
"dataset:Open-Orca/OpenOrca",
"dataset:stingning/ultrachat",
"dataset:meta-math/MetaMathQA",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:jondurbin/airoboros-3.1",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:RyokoAI/ShareGPT52K",
"dataset:RyokoAI/Fandom23K",
"dataset:milashkaarshif/MoeGirlPedia_wikitext_raw_archive",
"dataset:wikipedia",
"dataset:wiki_lingua",
"dataset:fnlp/moss-003-sft-data",
"dataset:garage-bAInd/Open-Platypus",
"dataset:LDJnr/Puffin",
"dataset:openbmb/llava_zh",
"dataset:BAAI/COIG",
"dataset:TigerResearch/tigerbot-zhihu-zh-10k",
"dataset:liwu/MNBVC",
"dataset:teknium/openhermes",
"license:wtfpl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-22T11:19:12Z" | ---
license: wtfpl
datasets:
- JosephusCheung/GuanacoDataset
- Open-Orca/OpenOrca
- stingning/ultrachat
- meta-math/MetaMathQA
- liuhaotian/LLaVA-Instruct-150K
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- fnlp/moss-003-sft-data
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- openbmb/llava_zh
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
language:
- en
- zh
pipeline_tag: text-generation
tags:
- llama
- llama2
- qwen
- causallm
---
[](https://causallm.org/)
*Image drawn by GPT-4 DALL·E 3* **TL;DR: Perhaps better than all existing models < 70B, in most quantitative evaluations...**
# CausalLM 14B - Fully Compatible with Meta LLaMA 2
Load the model with the standard transformers library, which requires no remote/external code, using AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load the LM and GPT2Tokenizer to load the tokenizer). Model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
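A minimal loading sketch (the dtype and device settings are illustrative, assuming a GPU with sufficient VRAM):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# No trust_remote_code needed: the model uses the stock LLaMA2 architecture.
tokenizer = AutoTokenizer.from_pretrained("CausalLM/14B")
model = AutoModelForCausalLM.from_pretrained(
    "CausalLM/14B",
    torch_dtype=torch.bfloat16,  # illustrative; fp16 also works
    device_map="auto",           # requires accelerate
)
```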
**News: the DPO version ranks #1 among ~13B models - SOTA model of its size on the 🤗 Open LLM Leaderboard**
# Recent Updates: [DPO-α Version](https://huggingface.co/CausalLM/14B-DPO-alpha) outperforms Zephyr-β in MT-Bench
# Friendly reminder: If your VRAM is insufficient, you should use the 7B model instead of the quantized version.
Compared to the quantized versions, the 7B version and the 14B version demonstrate a high level of consistency.
**llama.cpp GGUF models**
GPT2Tokenizer fixed by [Kerfuffle](https://github.com/KerfuffleV2) on [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743), new models are now reuploaded.
Thanks TheBloke for GGUF quants: [https://huggingface.co/TheBloke/CausalLM-14B-GGUF](https://huggingface.co/TheBloke/CausalLM-14B-GGUF)
**Caution:** Unofficial GPTQ and AWQ models may have issues as they use Wikitext for calibration, while this model has undergone considerable training on a synthesized Wikipedia conversation dataset.
It is not recommended to use any form of quantization, but rather to use smaller-sized models, as the 7B and 14B versions have high consistency. However, if you do use model quantization, please use GGUF.
# Read Me:
Also see [7B Version](https://huggingface.co/CausalLM/7B)
This model was trained based on the model weights of Qwen (and LLaMA2 weights were used, yes, for calculating some initial weights); depending on the situation, you may also need to comply with the commercial use restrictions of these two models. The training process used a model architecture identical to LLaMA2, with the same attention calculation method as the original MHA LLaMA2 models and no additional scaling applied to the Rotary Positional Encoding (RoPE).
We manually curated an SFT dataset of 1.3B tokens for training, drawing on open-source datasets from Hugging Face. For most of these sentences, we performed manual or synthetic rewrites and generated alternate-language versions using larger language models. Additionally, we conducted augmented text training using carefully selected entries from Wikipedia, featured entries from Fandom, and filtered entries from Moegirlpedia. To strike a balance between efficiency and quality, 100% of the data used for training was synthetic; no text from the internet or original text from publicly available datasets was used directly for fine-tuning.
The 7B version of the model is a distilled version of the 14B model, specifically designed for speculative sampling. Therefore, it is important to exercise caution when directly using the model, as it may produce hallucinations or unreliable outputs.
Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. Therefore, you will still need to perform your own checks on the model's safety and filter keywords in the output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor to train on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
Bonus: the model underwent some fine-tuning on the prompt format introduced in LLaVA1.5, which is unrelated to image attention calculation. Therefore, aligning the ViT projection module with the frozen LM under visual instructions would enable rapid implementation of effective multimodal capabilities.
## PROMPT FORMAT:
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
**System Prompt must not be empty!**
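For illustration, a minimal ChatML-formatted prompt built in Python (the system and user texts are placeholders; note the non-empty system message, as required above):

```python
# Illustrative ChatML prompt; special tokens delimit each turn.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```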
## MMLU:
STEM ACC: 64.19
Humanities ACC: 61.40
Other ACC: 71.64
Social ACC: 75.37
**AVERAGE ACC: 67.36** (Outperforms ALL models under 70B; very close to the best 70B fine-tunes)
## CEval (Val):
STEM ACC: 66.71
Social Science ACC: 85.10
Humanities ACC: 76.68
Other ACC: 70.23
Hard ACC: 54.71
**AVERAGE ACC: 73.10** (Outperforms Qwen-14B and GPT-4)
## GSM8K
**Zero-shot ACC 0.7012888551933283** (Outperforms MetaMath-13B, Qwen-14B)
## AlpacaEval Leaderboard
| | win_rate | standard_error | n_wins | n_wins_base | n_draws | n_total | mode | avg_length |
| ------------ | -------- | -------------- | ------ | ----------- | ------- | ------- | --------- | ---------- |
| causallm-14b | **88.26087** | 1.116333 | 705 | 89 | 11 | 805 | community | 1391 |
Win rate **88.26%** on [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/) [view raw](https://github.com/tatsu-lab/alpaca_eval/blob/3a47dcd81c56f6a8e6a5711f2754013919fbe90a/results/causallm-14b/model_outputs.json)
## MT-Bench on DPO Version
| Model | MT-Bench |
| ------------------------- | ------------ |
| GPT-4 | 8.99 |
| GPT-3.5-Turbo | 7.94 |
| | |
| Zephyr-7b-β (Overfitting) | 7.34 |
| Zephyr-7b-α | 6.88 |
| | |
| **[CausalLM/14B-DPO-α](https://huggingface.co/CausalLM/14B-DPO-alpha)** | **7.618868** |
| **[CausalLM/7B-DPO-α](https://huggingface.co/CausalLM/7B-DPO-alpha)** | **7.038125** |
## Other languages
We are currently unable to produce accurate benchmark templates for non-QA tasks (languages other than English and Chinese). However, we will be working on other language versions of the QA-Task challenge in the near future.
### Japanese Benchmark
| Task |Version| Metric |Value | |Stderr|
|----------------------|------:|--------|-----:|---|-----:|
|jcommonsenseqa-1.1-0.6| 1.1|acc |0.8213|± |0.0115|
*The JCommonsenseQA benchmark result is very, very close to [Japanese Stable LM Gamma 7B (83.47)](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable), the current SOTA Japanese LM. However, our model was not trained on a particularly large amount of Japanese text. This seems to reflect the cross-lingual transferability of metalinguistics.*
## 🤗 Open LLM Leaderboard
SOTA chat model of its size on 🤗 Open LLM Leaderboard.
Dec 3, 2023
The DPO version ranks **#1** among non-base models of its size on the 🤗 Open LLM Leaderboard, outperforming **ALL** ~13B chat models.

# CausalLM 14B - Fully Compatible with Meta LLaMA 2
Load the model with the transformers library, which requires no remote/external code, using AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load the LM and GPT2Tokenizer to load the tokenizer); model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
# News: the DPO version ranks #1 among all models of its size (~13B) on the 🤗 Open LLM Leaderboard
# Recent Updates: [DPO-α Version](https://huggingface.co/CausalLM/14B-DPO-alpha) surpasses Zephyr-β on MT-Bench
# Friendly reminder: if your VRAM is insufficient, you should use the 7B model instead of the quantized version.
Compared with the quantized versions, the 7B and 14B versions show a high degree of consistency.
**llama.cpp GGUF models**
GPT2Tokenizer support was fixed by [Kerfuffle](https://github.com/KerfuffleV2) in [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743); new models will be uploaded later.
Thanks to TheBloke for producing the GGUF quantized models: [https://huggingface.co/TheBloke/CausalLM-14B-GGUF](https://huggingface.co/TheBloke/CausalLM-14B-GGUF)
**Caution:** unofficial GPTQ and AWQ models may have issues, as they use Wikitext for calibration while this model has undergone considerable training on a synthesized Wikipedia conversation dataset.
It is not recommended to use any form of quantization; use smaller models instead, since the 7B and 14B versions are highly consistent. However, if you do quantize the model, please use GGUF.
## Read Me:
Also see the [7B version](https://huggingface.co/CausalLM/7B)
This model was trained based on the weights of Qwen (and LLaMA2 weights were used, yes, for some weight initialization); depending on the situation, you may also need to comply with the commercial use restrictions of these two models. Training used a model architecture identical to LLaMA2, with the same attention calculation method as the original MHA LLaMA2 models and no additional scaling of the Rotary Positional Encoding (RoPE).
We manually curated an SFT dataset of 1.3 billion tokens for training, drawing on open-source datasets from Hugging Face. For most sentences, we performed manual or synthetic rewrites and generated versions in other languages using larger language models. In addition, we conducted augmented text training with carefully selected entries from Wikipedia, featured entries from Fandom, and filtered entries from Moegirlpedia. To balance efficiency and quality, 100% of the training data was synthetic; no raw text from the internet or from publicly available datasets was used directly for fine-tuning.
The 7B version is a distilled version of the 14B model, designed specifically for speculative sampling. Caution is therefore required when using the model directly, as it may produce hallucinations or unreliable output.
Please note that the model was trained on unfiltered internet data. Since we cannot vet all of it, a substantial amount of objectionable content, pornography, violence, and offensive language may be present that we are unable to remove. You will therefore still need to check the model's safety yourself and filter keywords in the output. Due to computational resource constraints, we are currently unable to implement RLHF for the model's ethics and safety, nor to train on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
Bonus: the model received some fine-tuning on the prompt format introduced in LLaVA1.5, which is unrelated to image attention calculation. Aligning the ViT projection module with the frozen LM under visual instructions therefore enables rapid implementation of effective multimodal capabilities.
## PROMPT FORMAT:
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
**The system prompt must not be empty!**
## MMLU:
STEM ACC: 64.19
Humanities and Arts ACC: 61.40
Other Subjects ACC: 71.64
Social Sciences ACC: 75.37
**Average ACC: 67.36** (outperforms all models under 70B; very close to the best 70B fine-tuned models)
## CEval (Val):
STEM ACC: 66.71
Social Sciences ACC: 85.10
Humanities ACC: 76.68
Other Subjects ACC: 70.23
Hard ACC: 54.71
**Average ACC: 73.10** (outperforms Qwen-14B and GPT-4)
## GSM8K
**Zero-shot ACC 0.7012888551933283** (outperforms MetaMath-13B and Qwen-14B)
## AlpacaEval Leaderboard
| | win_rate | standard_error | n_wins | n_wins_base | n_draws | n_total | mode | avg_length |
| ------------ | -------- | -------------- | ------ | ----------- | ------- | ------- | --------- | ---------- |
| causallm-14b | **88.26087** | 1.116333 | 705 | 89 | 11 | 805 | community | 1391 |
Win rate **88.26%** on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/) [view raw](https://github.com/tatsu-lab/alpaca_eval/blob/3a47dcd81c56f6a8e6a5711f2754013919fbe90a/results/causallm-14b/model_outputs.json)
## MT-Bench on DPO Version
| Model | MT-Bench |
| ------------------------- | ------------ |
| GPT-4 | 8.99 |
| GPT-3.5-Turbo | 7.94 |
| | |
| Zephyr-7b-β (Overfitting) | 7.34 |
| Zephyr-7b-α | 6.88 |
| | |
| **[CausalLM/14B-DPO-α](https://huggingface.co/CausalLM/14B-DPO-alpha)** | **7.618868** |
| **[CausalLM/7B-DPO-α](https://huggingface.co/CausalLM/7B-DPO-alpha)** | **7.038125** |
## Other Languages
We are currently unable to produce accurate benchmark templates for non-QA tasks (languages other than English and Chinese). However, we will work on QA-task challenges in other languages in the near future.
### Japanese Benchmark
| Task |Version| Metric |Value | |Stderr|
|----------------------|------:|--------|-----:|---|-----:|
|jcommonsenseqa-1.1-0.6| 1.1|acc |0.8213|± |0.0115|
*The JCommonsenseQA benchmark result is very, very close to [Japanese Stable LM Gamma 7B (83.47)](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable), the current SOTA Japanese LM. However, our model was not trained on a particularly large amount of Japanese text. This appears to reflect the cross-lingual transferability of metalinguistics.*
## 🤗 Open LLM Leaderboard
Dec 3, 2023
The DPO version ranks **#1** among **ALL** ~13B chat models on the 🤗 Open LLM Leaderboard
 |
mHossain/mt5-base-bangla-para-v1 | mHossain | "2023-04-24T19:01:56Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-24T16:36:49Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-bangla-para-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-bangla-para-v1
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2349
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 18.3042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.8155 | 1.0 | 5250 | 1.2976 | 0.0 | 0.0 | 0.0 | 0.0 | 18.2241 |
| 1.6611 | 2.0 | 10500 | 1.2349 | 0.0 | 0.0 | 0.0 | 0.0 | 18.3042 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
JeswinMS4/finetuned-llama-2 | JeswinMS4 | "2023-09-28T05:24:52Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-28T05:24:50Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
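For reference, this configuration can be expressed as a transformers `BitsAndBytesConfig` roughly as follows (a sketch covering the non-default fields; not taken from the actual training code):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the listed config: 4-bit NF4 with double quantization and fp16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```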
### Framework versions
- PEFT 0.5.0
|
litagin/chupa-gpt2 | litagin | "2024-05-19T10:05:35Z" | 132 | 3 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-19T10:03:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ClarenceDan/a28228af-88cb-4dfb-aa7f-00c49ca6d0df | ClarenceDan | "2025-01-14T22:43:52Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-14T22:42:37Z" | ---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a28228af-88cb-4dfb-aa7f-00c49ca6d0df
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 83eb04d6d5cd887f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/83eb04d6d5cd887f_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/a28228af-88cb-4dfb-aa7f-00c49ca6d0df
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/83eb04d6d5cd887f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 76fc2a1b-51c3-4a84-a409-359a70661867
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 76fc2a1b-51c3-4a84-a409-359a70661867
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a28228af-88cb-4dfb-aa7f-00c49ca6d0df
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.6865 | 0.0022 | 1 | nan |
| 1.8509 | 0.0067 | 3 | nan |
| 0.0 | 0.0135 | 6 | nan |
| 2.2117 | 0.0202 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LHRuig/swagjacklsx | LHRuig | "2025-04-13T23:06:07Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-04-13T23:04:44Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: swagjacklsx
---
# swagjacklsx
<Gallery />
## Model description
A `swagjacklsx` LoRA for FLUX.1-dev.
## Trigger words
You should use `swagjacklsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/swagjacklsx/tree/main) them in the Files & versions tab.
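A minimal diffusers sketch, mirroring the usual FLUX LoRA loading pattern (the weight file name is left to auto-detection and the prompt is illustrative; check the Files & versions tab for the actual file):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the FLUX.1-dev base pipeline, then attach this repo's LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("LHRuig/swagjacklsx")  # assumes a single LoRA file in the repo
image = pipeline("swagjacklsx wearing a suit").images[0]  # trigger word per the section above
```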
|
hoanganh686/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_pudgy_ibis | hoanganh686 | "2025-04-10T03:05:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lazy pudgy ibis",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-09T04:45:40Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_pudgy_ibis
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lazy pudgy ibis
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_pudgy_ibis
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hoanganh686/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_pudgy_ibis", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
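As a rough illustration of a GRPO setup in TRL (a sketch only: the dataset, reward function, and output directory are illustrative placeholders, not the actual swarm training configuration):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 20 characters (stand-in for a real reward).
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # illustrative dataset
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO"),  # placeholder output dir
    train_dataset=dataset,
)
trainer.train()
```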
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |