| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| stringlengths 5–139 | stringlengths 2–42 | timestamp[us, tz=UTC] 2020-02-15 11:33:14 – 2025-06-27 18:27:39 | int64 0–223M | int64 0–11.7k | stringclasses 500 values | sequencelengths 1–4.05k | stringclasses 54 values | timestamp[us, tz=UTC] 2022-03-02 23:29:04 – 2025-06-27 18:23:41 | stringlengths 11–1.01M |
dbaek111/Mistral-7B-v0.2-Elon_500-instruct | dbaek111 | 2024-05-14T09:54:31Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-14T09:51:04Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
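The author has not provided a snippet yet; pending that, here is a hedged sketch based on this repo's tags (`transformers`, `text-generation`, `4-bit`, `bitsandbytes`). The generation settings are illustrative assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "dbaek111/Mistral-7B-v0.2-Elon_500-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load in 4-bit, matching the repo's bitsandbytes tags.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```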
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ariakhosh/adapter1 | ariakhosh | 2024-05-14T09:54:19Z | 0 | 0 | null | [
"safetensors",
"arxiv:2305.14314",
"arxiv:2302.13971",
"region:us"
] | null | 2024-05-14T09:53:06Z | # QLoRA Instruction Tuned Models
| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
**The `QLoRA Instruction Tuned Models` are open-source models obtained through 4-bit QLoRA tuning of LLaMA base models on various instruction tuning datasets. They are available in 7B, 13B, 33B, and 65B parameter sizes.**
**Note: The best-performing chatbot models are named [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) and are finetuned on OASST1. This model card is for the other models, finetuned on other instruction tuning datasets.**
⚠️ These models are purely intended for research purposes and could produce problematic outputs.
## What are QLoRA Instruction Tuned Models and why use them?
- **Strong performance on MMLU** following the QLoRA instruction tuning.
- **Replicable and efficient instruction tuning procedure** that can be extended to new use cases. QLoRA training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints that contain only the adapter weights.
## License and Intended Use
QLoRA Instruction Tuned adapter weights are available under the Apache 2.0 license. Note that using these adapter weights requires access to the LLaMA model weights; they should therefore be used in accordance with the LLaMA license.
## Usage
Here is an example of how you would load the Flan v2 7B adapter in 4-bit:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/qlora-flan-7b'
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    # Cap per-GPU memory; adjust to your hardware.
    max_memory={i: "24000MB" for i in range(torch.cuda.device_count())},
    # 4-bit loading is configured here; passing load_in_4bit alongside
    # quantization_config raises an error in recent transformers versions.
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
    ),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Inference can then be performed as usual with HF models as follows:
```python
prompt = "Introduce yourself"
formatted_prompt = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
You should see output similar to the following:
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```
## Current Inference Limitations
Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/qlora-flan-7b'

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={i: "24000MB" for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Model Card
**Architecture**: The models released here are LoRA adapters to be used on top of LLaMA models. They are added to all linear layers. For all model sizes, we use $r=64$.
**Base Model**: These models use LLaMA as the base model, at sizes of 7B, 13B, 33B, and 65B parameters. LLaMA is a causal language model pretrained on a large corpus of text. See the [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that these models can inherit the biases and limitations of the base model.
**Finetuning Data**: These models are finetuned on various instruction tuning datasets: Alpaca, HH-RLHF, Unnatural Instructions, Chip2, Longform, Self-Instruct, and FLAN v2.
**Languages**: The datasets cover different languages. We refer readers to the papers and resources describing each dataset for details.
Next, we describe Training and Evaluation details.
### Training
QLoRA Instruction Tuned Models are the result of 4-bit QLoRA supervised finetuning on different instruction tuning datasets.
All models use the NormalFloat4 (NF4) data type for the base model and attach LoRA adapters to all linear layers, with BFloat16 as the computation data type. We set LoRA $r=64$ and $\alpha=16$. We also use an Adam beta2 of 0.999, a max grad norm of 0.3, and a LoRA dropout of 0.1 for models up to 13B and 0.05 for the 33B and 65B models.
For the finetuning process, we use a constant learning rate schedule and the paged AdamW optimizer.
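With the `peft` and `transformers` APIs, the setup above translates roughly into the following (a hedged sketch, not the released training script; the `target_modules` list is an assumption for LLaMA-style models, and the learning rate is the 7B/13B value from the table below; see the [QLoRA repo](https://github.com/artidoro/qlora) for the authoritative scripts):
```python
from peft import LoraConfig
from transformers import TrainingArguments

peft_config = LoraConfig(
    r=64,                 # LoRA rank used for all model sizes
    lora_alpha=16,
    lora_dropout=0.1,     # 0.1 up to 13B; 0.05 for 33B/65B
    # Assumed names of all linear layers in LLaMA-style models:
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="qlora-out",            # assumption
    optim="paged_adamw_32bit",         # paged AdamW
    lr_scheduler_type="constant",      # constant LR schedule
    learning_rate=2e-4,
    adam_beta2=0.999,
    max_grad_norm=0.3,
    bf16=True,                         # BFloat16 compute dtype
)
```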
### Training hyperparameters
| Parameters | Dataset | Batch size | LR | Steps | Source Length | Target Length |
|------------|----------|------------|------|-------|---------------|---------------|
| 7B | All | 16 | 2e-4 | 10000 | 384 | 128 |
| 7B | OASST1 | 16 | 2e-4 | 1875 | - | 512 |
| 7B | HH-RLHF | 16 | 2e-4 | 10000 | - | 768 |
| 7B | Longform | 16 | 2e-4 | 4000 | 512 | 1024 |
| 13B | All | 16 | 2e-4 | 10000 | 384 | 128 |
| 13B | OASST1 | 16 | 2e-4 | 1875 | - | 512 |
| 13B | HH-RLHF | 16 | 2e-4 | 10000 | - | 768 |
| 13B | Longform | 16 | 2e-4 | 4000 | 512 | 1024 |
| 33B | All | 32 | 1e-4 | 5000 | 384 | 128 |
| 33B | OASST1 | 16 | 1e-4 | 1875 | - | 512 |
| 33B | HH-RLHF | 32 | 1e-4 | 5000 | - | 768 |
| 33B | Longform | 32 | 1e-4 | 2343 | 512 | 1024 |
| 65B | All | 64 | 1e-4 | 2500 | 384 | 128 |
| 65B | OASST1 | 16 | 1e-4 | 1875 | - | 512 |
| 65B | HH-RLHF | 64 | 1e-4 | 2500 | - | 768 |
| 65B | Longform | 32 | 1e-4 | 2343 | 512 | 1024 |
### Evaluation
We use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.
| Dataset | 7B | 13B | 33B | 65B |
|---|---|---|---|---|
| LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4 |
| Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7 |
| Longform | 32.1 | 43.2 | 56.6 | 59.7 |
| Chip2 | 34.5 | 41.6 | 53.6 | 59.8 |
| HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1 |
| Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3 |
| OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2 |
| Alpaca | 38.8 | 47.8 | 57.3 | 62.5 |
| FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9 |
We evaluate generative language capabilities through automated evaluations on the Vicuna benchmark. We report the score of the QLoRA Instruction Finetuned Models relative to the score obtained by ChatGPT. The rater is GPT-4, which is tasked to assign a score out of 10 to both ChatGPT's and the model's outputs for each prompt. We report scores for models ranging from 7B to 65B and compare them to both academic and commercial baselines.
| Model / Dataset | Params | Model bits | Memory | ChatGPT vs Sys | Sys vs ChatGPT | Mean | 95\% CI |
|------------------|--------|------------|--------|----------------|----------------|------------------|---------|
| GPT-4 | - | - | - | 119.4\% | 110.1\% | **114.5**\% | 2.6\% |
| Bard | - | - | - | 93.2\% | 96.4\% | 94.8\% | 4.1\% |
| Guanaco | 65B | 4-bit | 41 GB | 96.7\% | 101.9\% | **99.3**\% | 4.4\% |
| Alpaca | 65B | 4-bit | 41 GB | 63.0\% | 77.9\% | 70.7\% | 4.3\% |
| FLAN v2 | 65B | 4-bit | 41 GB | 37.0\% | 59.6\% | 48.4\% | 4.6\% |
| Guanaco | 33B | 4-bit | 21 GB | 96.5\% | 99.2\% | **97.8**\% | 4.4\% |
| Open Assistant | 33B | 16-bit | 66 GB | 73.4\% | 85.7\% | 78.1\% | 5.3\% |
| Alpaca | 33B | 4-bit | 21 GB | 67.2\% | 79.7\% | 73.6\% | 4.2\% |
| FLAN v2 | 33B | 4-bit | 21 GB | 26.3\% | 49.7\% | 38.0\% | 3.9\% |
| Vicuna | 13B | 16-bit | 26 GB | 91.2\% | 98.7\% | **94.9**\% | 4.5\% |
| Guanaco | 13B | 4-bit | 10 GB | 87.3\% | 93.4\% | 90.4\% | 5.2\% |
| Alpaca | 13B | 4-bit | 10 GB | 63.8\% | 76.7\% | 69.4\% | 4.2\% |
| HH-RLHF | 13B | 4-bit | 10 GB | 55.5\% | 69.1\% | 62.5\% | 4.7\% |
| Unnatural Instr. | 13B | 4-bit | 10 GB | 50.6\% | 69.8\% | 60.5\% | 4.2\% |
| Chip2 | 13B | 4-bit | 10 GB | 49.2\% | 69.3\% | 59.5\% | 4.7\% |
| Longform | 13B | 4-bit | 10 GB | 44.9\% | 62.0\% | 53.6\% | 5.2\% |
| Self-Instruct | 13B | 4-bit | 10 GB | 38.0\% | 60.5\% | 49.1\% | 4.6\% |
| FLAN v2 | 13B | 4-bit | 10 GB | 32.4\% | 61.2\% | 47.0\% | 3.6\% |
| Guanaco | 7B | 4-bit | 5 GB | 84.1\% | 89.8\% | **87.0**\% | 5.4\% |
| Alpaca | 7B | 4-bit | 5 GB | 57.3\% | 71.2\% | 64.4\% | 5.0\% |
| FLAN v2 | 7B | 4-bit | 5 GB | 33.3\% | 56.1\% | 44.8\% | 4.0\% |
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
``` |
Litzy619/G0513HMA4H | Litzy619 | 2024-05-14T09:50:37Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T09:02:56Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0513HMA4H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0513HMA4H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (restated as a `TrainingArguments` sketch below the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
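The list above corresponds roughly to the following 🤗 `TrainingArguments` (a hedged reconstruction, not the author's script; `output_dir` and the `fp16` flag are assumptions):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="G0513HMA4H",          # assumption
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,   # 8 x 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    fp16=True,                        # mixed precision ("Native AMP")
)
```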
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1824 | 0.09 | 10 | 2.8991 |
| 2.6535 | 0.18 | 20 | 2.2337 |
| 1.8658 | 0.27 | 30 | 1.4097 |
| 1.0771 | 0.36 | 40 | 0.6675 |
| 0.4164 | 0.45 | 50 | 0.2215 |
| 0.1854 | 0.54 | 60 | 0.1678 |
| 0.1586 | 0.63 | 70 | 0.1549 |
| 0.153 | 0.73 | 80 | 0.1504 |
| 0.1434 | 0.82 | 90 | 0.1510 |
| 0.1463 | 0.91 | 100 | 0.1488 |
| 0.1487 | 1.0 | 110 | 0.1499 |
| 0.1439 | 1.09 | 120 | 0.1488 |
| 0.1454 | 1.18 | 130 | 0.1481 |
| 0.1456 | 1.27 | 140 | 0.1468 |
| 0.148 | 1.36 | 150 | 0.1459 |
| 0.1426 | 1.45 | 160 | 0.1489 |
| 0.1441 | 1.54 | 170 | 0.1468 |
| 0.1447 | 1.63 | 180 | 0.1448 |
| 0.1456 | 1.72 | 190 | 0.1494 |
| 0.1454 | 1.81 | 200 | 0.1461 |
| 0.1448 | 1.9 | 210 | 0.1451 |
| 0.1454 | 1.99 | 220 | 0.1436 |
| 0.1406 | 2.08 | 230 | 0.1407 |
| 0.136 | 2.18 | 240 | 0.1395 |
| 0.1345 | 2.27 | 250 | 0.1406 |
| 0.1392 | 2.36 | 260 | 0.1384 |
| 0.1356 | 2.45 | 270 | 0.1367 |
| 0.1343 | 2.54 | 280 | 0.1357 |
| 0.1313 | 2.63 | 290 | 0.1344 |
| 0.13 | 2.72 | 300 | 0.1331 |
| 0.1356 | 2.81 | 310 | 0.1330 |
| 0.1338 | 2.9 | 320 | 0.1330 |
| 0.1323 | 2.99 | 330 | 0.1331 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
Mag0g/Ezekiel26_14 | Mag0g | 2024-05-14T09:49:13Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T09:48:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
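The author has not provided a snippet; the following is a hedged sketch inferred from the repo's `stablelm` and `conversational` tags (the chat format is an assumption and requires the tokenizer to ship a chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mag0g/Ezekiel26_14"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format a single-turn conversation with the tokenizer's chat template (assumed to exist).
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```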
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ankesh1234/gemma_finetuned_medical | Ankesh1234 | 2024-05-14T09:47:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2024-05-14T09:46:23Z | ---
library_name: peft
base_model: google/gemma-2b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
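No snippet is provided; below is a hedged sketch that loads the `google/gemma-2b` base model (named in this card's metadata) and attaches this PEFT adapter on top. The prompt is illustrative:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the adapter weights from this repo.
model = PeftModel.from_pretrained(base, "Ankesh1234/gemma_finetuned_medical")

inputs = tokenizer("What are common symptoms of the flu?", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```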
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
terry69/mistral_poe_20 | terry69 | 2024-05-14T09:47:32Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T08:07:53Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: mistral_poe_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_poe_20
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.1 | 1.0 | 325 | nan |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
NexusNinja/wikisql-4bit-1k | NexusNinja | 2024-05-14T09:44:10Z | 4 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"pretrained",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-05-14T09:42:02Z | ---
language:
- en
license: apache-2.0
tags:
- pretrained
- mlx
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.7
---
# colombox/wikisql-4bit-1k
The model [colombox/wikisql-4bit-1k](https://huggingface.co/colombox/wikisql-4bit-1k) was converted to MLX format from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) using mlx-lm version **0.13.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("colombox/wikisql-4bit-1k")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
OsherElhadad/a2c-PandaReachDense-v3 | OsherElhadad | 2024-05-14T09:41:31Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-14T09:37:21Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch (hedged: the exact checkpoint filename inside this repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="OsherElhadad/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```
|
Zetsubou99/distilgpt2-finetuned-wikitext2 | Zetsubou99 | 2024-05-14T09:40:30Z | 213 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T09:13:58Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7501 | 1.0 | 2334 | 3.6669 |
| 3.6498 | 2.0 | 4668 | 3.6464 |
| 3.6023 | 3.0 | 7002 | 3.6420 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ShenaoZhang/0.0001_zephyr_5551_4iters_bs256_iter_4 | ShenaoZhang | 2024-05-14T09:38:50Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.0001_zephyr_5551_4iters_bs256_iter_3",
"base_model:finetune:ShenaoZhang/0.0001_zephyr_5551_4iters_bs256_iter_3",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T08:52:10Z | ---
license: mit
base_model: ShenaoZhang/0.0001_zephyr_5551_4iters_bs256_iter_3
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0001_zephyr_5551_4iters_bs256_iter_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_zephyr_5551_4iters_bs256_iter_4
This model is a fine-tuned version of [ShenaoZhang/0.0001_zephyr_5551_4iters_bs256_iter_3](https://huggingface.co/ShenaoZhang/0.0001_zephyr_5551_4iters_bs256_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
kyl23/hw3_RTE_lora_1e-2 | kyl23 | 2024-05-14T09:37:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T09:37:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blackkevin/first_finetune | blackkevin | 2024-05-14T09:34:51Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T20:06:07Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** blackkevin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
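A hedged loading sketch with Unsloth's `FastLanguageModel` (`max_seq_length` is an illustrative assumption):
```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint in 4-bit, as it was trained from a bnb-4bit base.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="blackkevin/first_finetune",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```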
|
iarkh/donut-demo | iarkh | 2024-05-14T09:18:12Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-05T14:55:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
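No snippet is provided; below is a hedged sketch that assumes this checkpoint follows the standard Donut (`VisionEncoderDecoder`) pipeline, as the repo's tags suggest. The input file and decoding settings are assumptions:
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo_id = "iarkh/donut-demo"
processor = DonutProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)

# Encode a document image and generate text from it.
image = Image.open("document.png").convert("RGB")  # assumption
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, max_new_tokens=256)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```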
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DalHyun/donut-base-contigo | DalHyun | 2024-05-14T09:17:08Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-13T11:37:22Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-contigo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-contigo
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.12.0
- Tokenizers 0.19.1
|
Litzy619/G0513HMA25H | Litzy619 | 2024-05-14T09:14:37Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T08:00:20Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0513HMA25H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0513HMA25H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1589 | 0.09 | 10 | 2.7634 |
| 2.422 | 0.18 | 20 | 1.8450 |
| 1.3644 | 0.27 | 30 | 0.8162 |
| 0.4569 | 0.36 | 40 | 0.1885 |
| 0.17 | 0.45 | 50 | 0.1622 |
| 0.1549 | 0.54 | 60 | 0.1520 |
| 0.1502 | 0.63 | 70 | 0.1508 |
| 0.1525 | 0.73 | 80 | 0.1485 |
| 0.1547 | 0.82 | 90 | 0.1488 |
| 0.1467 | 0.91 | 100 | 0.1482 |
| 0.1483 | 1.0 | 110 | 0.1482 |
| 0.1434 | 1.09 | 120 | 0.1475 |
| 0.1437 | 1.18 | 130 | 0.1492 |
| 0.1427 | 1.27 | 140 | 0.1385 |
| 0.1412 | 1.36 | 150 | 0.1381 |
| 0.1351 | 1.45 | 160 | 0.1341 |
| 0.1334 | 1.54 | 170 | 0.1312 |
| 0.1321 | 1.63 | 180 | 0.1271 |
| 0.1329 | 1.72 | 190 | 0.1333 |
| 0.1298 | 1.81 | 200 | 0.1244 |
| 0.1278 | 1.9 | 210 | 0.1251 |
| 0.1277 | 1.99 | 220 | 0.1219 |
| 0.1163 | 2.08 | 230 | 0.1188 |
| 0.1154 | 2.18 | 240 | 0.1200 |
| 0.1136 | 2.27 | 250 | 0.1185 |
| 0.117 | 2.36 | 260 | 0.1167 |
| 0.1147 | 2.45 | 270 | 0.1159 |
| 0.1084 | 2.54 | 280 | 0.1149 |
| 0.1077 | 2.63 | 290 | 0.1130 |
| 0.1098 | 2.72 | 300 | 0.1119 |
| 0.1125 | 2.81 | 310 | 0.1115 |
| 0.1124 | 2.9 | 320 | 0.1114 |
| 0.1119 | 2.99 | 330 | 0.1114 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
SageLiao/llama3-LlamaFactory-demo-v2 | SageLiao | 2024-05-14T09:14:34Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T09:09:08Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
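No snippet is provided; here is a hedged sketch via the `transformers` pipeline API, based on the repo's `llama`/`text-generation` tags (the prompt is illustrative):
```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="SageLiao/llama3-LlamaFactory-demo-v2",
    device_map="auto",
)
print(pipe("Hello, who are you?", max_new_tokens=32)[0]["generated_text"])
```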
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LujainAbdulrahman/llama3-lora-AE-3 | LujainAbdulrahman | 2024-05-14T09:13:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T09:13:17Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** LujainAbdulrahman
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
saucam/aqua-qwen-0.1-110B | saucam | 2024-05-14T09:10:54Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"arxiv:2311.03099",
"base_model:Qwen/Qwen1.5-110B-Chat",
"base_model:merge:Qwen/Qwen1.5-110B-Chat",
"base_model:cognitivecomputations/dolphin-2.9.1-qwen-110b",
"base_model:merge:cognitivecomputations/dolphin-2.9.1-qwen-110b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T16:13:36Z | ---
base_model:
- cognitivecomputations/dolphin-2.9.1-qwen-110b
- Qwen/Qwen1.5-110B-Chat
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---

## aqua-qwen-0.1-110B
This model was created by merging two models with the linear [DARE](https://arxiv.org/abs/2311.03099) merge method,
using [mergekit](https://github.com/arcee-ai/mergekit).
The following models were included in the merge:
- [cognitivecomputations/dolphin-2.9.1-qwen-110b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-qwen-110b) as a base.
- [Qwen/Qwen1.5-110B-Chat](https://huggingface.co/Qwen/Qwen1.5-110B-Chat)
## Configuration
The following YAML configuration was used to produce this model:
```yaml
name: aqua-qwen-0.1-110B
base_model:
model:
path: cognitivecomputations/dolphin-2.9.1-qwen-110b
dtype: bfloat16
merge_method: dare_linear
parameters:
normalize: 1.0
slices:
- sources:
- model: cognitivecomputations/dolphin-2.9.1-qwen-110b
layer_range: [0, 80]
parameters:
weight: 0.6
- model: Qwen/Qwen1.5-110B-Chat
layer_range: [0, 80]
parameters:
weight: 0.4
```
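For reference, a merge like this is normally produced with mergekit's CLI (a sketch, assuming the configuration above is saved as `config.yaml`):

```bash
pip install mergekit
mergekit-yaml config.yaml ./aqua-qwen-0.1-110B --cuda
```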
## Usage
It is recommended to use the GGUF version of the model, [available here](https://huggingface.co/saucam/aqua-qwen-0.1-110B-GGUF/blob/main/README.md). |
serjtroshin/finetuned_gpt2_toxic | serjtroshin | 2024-05-14T09:10:10Z | 144 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T09:06:24Z | ---
license: apache-2.0
---
|
quangtqv/bi_encoder_tool_learning_14_5_2024_v8 | quangtqv | 2024-05-14T09:08:57Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-14T09:08:44Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# quangtqv/bi_encoder_tool_learning_14_5_2024_v8
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('quangtqv/bi_encoder_tool_learning_14_5_2024_v8')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=quangtqv/bi_encoder_tool_learning_14_5_2024_v8)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
saucam/aqua-qwen-0.1-110B-GGUF | saucam | 2024-05-14T09:07:11Z | 12 | 0 | null | [
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T07:11:29Z | ---
license: apache-2.0
language:
- en
---
This is the GGUF version of [saucam/aqua-qwen-0.1-110B](https://huggingface.co/saucam/aqua-qwen-0.1-110B)
## Usage
Download the two files and merge them using [llama.cpp](https://github.com/ggerganov/llama.cpp):
```
gguf-split --merge aqua-qwen-0.1-110B-Q4_K_M-00001-of-00002.gguf aqua-qwen-0.1-110B-Q4_K_M.gguf
```
Then use the single merged file as shown below:
```
$ ./main -m aqua-qwen-0.1-110B-Q4_K_M.gguf -p "<|im_start|>user\nHow are you?<|im_end|>\n<|im_start|>assistant" -n 400 -e
Log start
main: build = 2874 (e0f55618)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1715672499
llama_model_loader: loaded meta data with 20 key-value pairs and 963 tensors from aqua-qwen-0.1-110B-Q4_K_M.gguf (version GGUF V3 (latest))
...
...
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 2048, n_predict = 400, n_keep = 0
,<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
I am an AI, I do not have feelings. How can I assist you?<|im_end|> [end of text]
llama_print_timings: load time = 4065.12 ms
llama_print_timings: sample time = 1.70 ms / 19 runs ( 0.09 ms per token, 11150.23 tokens per second)
llama_print_timings: prompt eval time = 2898.40 ms / 12 tokens ( 241.53 ms per token, 4.14 tokens per second)
llama_print_timings: eval time = 178067.55 ms / 18 runs ( 9892.64 ms per token, 0.10 tokens per second)
llama_print_timings: total time = 181014.78 ms / 30 tokens
Log end
``` |
quanthunter/Hermes-2-Pro-Llama-3-8B-Q4_K_M-GGUF | quanthunter | 2024-05-14T09:06:56Z | 5 | 0 | null | [
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:quantized:NousResearch/Meta-Llama-3-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T09:05:51Z | ---
language:
- en
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- llama-cpp
- gguf-my-repo
base_model: NousResearch/Meta-Llama-3-8B
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Llama-3-8B
results: []
---
# quanthunter/Hermes-2-Pro-Llama-3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`NousResearch/Hermes-2-Pro-Llama-3-8B`](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo quanthunter/Hermes-2-Pro-Llama-3-8B-Q4_K_M-GGUF --model hermes-2-pro-llama-3-8b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo quanthunter/Hermes-2-Pro-Llama-3-8B-Q4_K_M-GGUF --model hermes-2-pro-llama-3-8b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m hermes-2-pro-llama-3-8b.Q4_K_M.gguf -n 128
```
|
fine-tuned/norwegian-nli-triplets-c | fine-tuned | 2024-05-14T09:06:35Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Keywords",
"Documents",
"Search",
"Information",
"Answers",
"custom_code",
"no",
"dataset:fine-tuned/norwegian-nli-triplets-c",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-14T07:30:08Z | ---
license: apache-2.0
datasets:
- fine-tuned/norwegian-nli-triplets-c
- allenai/c4
language:
- no
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Keywords
- Documents
- Search
- Information
- Answers
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
Keyword-based search engine for documents
## How to Use
This model produces text embeddings that can be integrated into your NLP pipeline for tasks such as semantic search, clustering, and similarity ranking. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/norwegian-nli-triplets-c',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
Litzy619/G0513HMA16H | Litzy619 | 2024-05-14T09:05:54Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T07:51:41Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0513HMA16H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0513HMA16H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` equivalent is sketched after this list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
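For orientation, those settings map roughly onto transformers' `TrainingArguments` as follows; this is an illustrative reconstruction, not the actual training script:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="G0513HMA16H",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,  # 8 x 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)
```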
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1825 | 0.09 | 10 | 2.8689 |
| 2.5641 | 0.18 | 20 | 2.0695 |
| 1.6393 | 0.27 | 30 | 1.1468 |
| 0.8037 | 0.36 | 40 | 0.3841 |
| 0.2412 | 0.45 | 50 | 0.2008 |
| 0.1664 | 0.54 | 60 | 0.1550 |
| 0.1533 | 0.63 | 70 | 0.1518 |
| 0.1517 | 0.73 | 80 | 0.1515 |
| 0.1433 | 0.82 | 90 | 0.1521 |
| 0.1475 | 0.91 | 100 | 0.1492 |
| 0.1493 | 1.0 | 110 | 0.1503 |
| 0.1457 | 1.09 | 120 | 0.1492 |
| 0.1462 | 1.18 | 130 | 0.1483 |
| 0.1464 | 1.27 | 140 | 0.1473 |
| 0.1488 | 1.36 | 150 | 0.1480 |
| 0.1424 | 1.45 | 160 | 0.1494 |
| 0.1444 | 1.54 | 170 | 0.1461 |
| 0.1461 | 1.63 | 180 | 0.1459 |
| 0.1463 | 1.72 | 190 | 0.1475 |
| 0.144 | 1.81 | 200 | 0.1454 |
| 0.1445 | 1.9 | 210 | 0.1436 |
| 0.1418 | 1.99 | 220 | 0.1384 |
| 0.1376 | 2.08 | 230 | 0.1386 |
| 0.1331 | 2.18 | 240 | 0.1328 |
| 0.1313 | 2.27 | 250 | 0.1339 |
| 0.132 | 2.36 | 260 | 0.1329 |
| 0.1302 | 2.45 | 270 | 0.1329 |
| 0.1268 | 2.54 | 280 | 0.1294 |
| 0.1242 | 2.63 | 290 | 0.1281 |
| 0.1238 | 2.72 | 300 | 0.1270 |
| 0.1249 | 2.81 | 310 | 0.1267 |
| 0.1243 | 2.9 | 320 | 0.1267 |
| 0.1254 | 2.99 | 330 | 0.1267 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
Shankhdhar/classifier_woog | Shankhdhar | 2024-05-14T09:05:28Z | 8 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2024-05-10T09:47:07Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/paraphrase-mpnet-base-v2
metrics:
- accuracy
widget:
- text: cookie boxes for gifting under $20
- text: Are there any restrictions on returning candle supplies?
- text: special features for bakery boxes
- text: I need to confirm the shipping date for my recent purchase. Can you help me
with that?
- text: different types of bakery boxes available
pipeline_tag: text-classification
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8380952380952381
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a minimal training sketch is shown below).
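A minimal sketch of that two-step loop with SetFit's v1 `Trainer` API (the example rows are illustrative, not the actual training data):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot examples drawn from this card's label space.
train_dataset = Dataset.from_dict({
    "text": ["Where is my order?", "Do you have gift boxes under $20?"],
    "label": ["order tracking", "product discoverability"],
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    labels=["order tracking", "product discoverability", "product faq", "product policy"],
)
args = TrainingArguments(batch_size=16, num_epochs=2)  # matches the hyperparameters below
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # contrastive fine-tuning, then fitting the classification head
```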
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| product discoverability | <ul><li>'Do you have Adidas Superstar shoes?'</li><li>'Do you have any running shoes in pink color?'</li><li>'Do you have black Yeezy sneakers in size 9?'</li></ul> |
| order tracking | <ul><li>"I'm concerned about the delay in the delivery of my order. Can you please provide me with the status?"</li><li>'What is the estimated delivery time for orders within the same city?'</li><li>"I placed an order last week and it still hasn't arrived. Can you check the status for me?"</li></ul> |
| product policy | <ul><li>'Are there any exceptions to the return policy for items that were purchased with a student discount?'</li><li>'Do you offer a try-and-buy option for sneakers?'</li><li>'Do you offer a price adjustment for sneakers if the price drops after purchase?'</li></ul> |
| product faq | <ul><li>'Do you have any limited edition sneakers available?'</li><li>'Are the Adidas Yeezy Foam Runner available in size 7?'</li><li>"Are the Nike Air Force 1 sneakers available in women's sizes?"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8381 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Shankhdhar/classifier_woog")
# Run inference
preds = model("special features for bakery boxes")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 11.6415 | 24 |
| Label | Training Sample Count |
|:------------------------|:----------------------|
| order tracking | 30 |
| product discoverability | 30 |
| product faq | 16 |
| product policy | 30 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0019 | 1 | 0.1782 | - |
| 0.0965 | 50 | 0.0628 | - |
| 0.1931 | 100 | 0.0036 | - |
| 0.2896 | 150 | 0.0013 | - |
| 0.3861 | 200 | 0.0012 | - |
| 0.4826 | 250 | 0.0003 | - |
| 0.5792 | 300 | 0.0002 | - |
| 0.6757 | 350 | 0.0003 | - |
| 0.7722 | 400 | 0.0002 | - |
| 0.8687 | 450 | 0.0005 | - |
| 0.9653 | 500 | 0.0003 | - |
| 1.0618 | 550 | 0.0001 | - |
| 1.1583 | 600 | 0.0002 | - |
| 1.2548 | 650 | 0.0002 | - |
| 1.3514 | 700 | 0.0002 | - |
| 1.4479 | 750 | 0.0001 | - |
| 1.5444 | 800 | 0.0001 | - |
| 1.6409 | 850 | 0.0001 | - |
| 1.7375 | 900 | 0.0002 | - |
| 1.8340 | 950 | 0.0001 | - |
| 1.9305 | 1000 | 0.0001 | - |
### Framework Versions
- Python: 3.9.16
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.2
- PyTorch: 2.3.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
pilsneyrouset/nils4.0 | pilsneyrouset | 2024-05-14T09:03:46Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-14T09:00:49Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** pilsneyrouset
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
marsfu2009/XXMeagYY_sd_lora | marsfu2009 | 2024-05-14T09:02:06Z | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-14T07:39:54Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - marsfu2009/XXMeagYY_sd_lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the marsfu2009/MegaSticker dataset. Some example images are shown below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
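Until the author adds a snippet, here is a minimal sketch using diffusers' standard LoRA loading (untested; the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model this LoRA was trained against, then apply the adapter weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("marsfu2009/XXMeagYY_sd_lora")

image = pipe("a cartoon sticker of a smiling girl", num_inference_steps=30).images[0]
image.save("sticker.png")
```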
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
The model was fine-tuned on the marsfu2009/MegaSticker dataset, as noted above. |
GodsonNtungi/Training-Checkpoint | GodsonNtungi | 2024-05-14T09:02:05Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2024-05-13T23:24:10Z | ---
library_name: peft
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
Litzy619/G0513HMA3H | Litzy619 | 2024-05-14T09:01:36Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T08:13:46Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0513HMA3H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0513HMA3H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.161 | 0.09 | 10 | 2.8226 |
| 2.4808 | 0.18 | 20 | 1.9257 |
| 1.4989 | 0.27 | 30 | 0.9749 |
| 0.6093 | 0.36 | 40 | 0.2572 |
| 0.1925 | 0.45 | 50 | 0.1591 |
| 0.1558 | 0.54 | 60 | 0.1523 |
| 0.1517 | 0.63 | 70 | 0.1497 |
| 0.1503 | 0.73 | 80 | 0.1487 |
| 0.1422 | 0.82 | 90 | 0.1499 |
| 0.1459 | 0.91 | 100 | 0.1487 |
| 0.1494 | 1.0 | 110 | 0.1495 |
| 0.1438 | 1.09 | 120 | 0.1499 |
| 0.1458 | 1.18 | 130 | 0.1472 |
| 0.1465 | 1.27 | 140 | 0.1463 |
| 0.1483 | 1.36 | 150 | 0.1464 |
| 0.1426 | 1.45 | 160 | 0.1480 |
| 0.1433 | 1.54 | 170 | 0.1450 |
| 0.1443 | 1.63 | 180 | 0.1440 |
| 0.1455 | 1.72 | 190 | 0.1495 |
| 0.1437 | 1.81 | 200 | 0.1439 |
| 0.1433 | 1.9 | 210 | 0.1398 |
| 0.1408 | 1.99 | 220 | 0.1387 |
| 0.1348 | 2.08 | 230 | 0.1340 |
| 0.1311 | 2.18 | 240 | 0.1334 |
| 0.1303 | 2.27 | 250 | 0.1297 |
| 0.1319 | 2.36 | 260 | 0.1285 |
| 0.1297 | 2.45 | 270 | 0.1291 |
| 0.129 | 2.54 | 280 | 0.1270 |
| 0.1247 | 2.63 | 290 | 0.1252 |
| 0.1251 | 2.72 | 300 | 0.1242 |
| 0.1299 | 2.81 | 310 | 0.1239 |
| 0.1271 | 2.9 | 320 | 0.1240 |
| 0.1269 | 2.99 | 330 | 0.1240 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
mradermacher/AllOverButTheCrying-7B-slerp-GGUF | mradermacher | 2024-05-14T09:00:27Z | 34 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"jondurbin/bagel-dpo-7b-v0.5",
"Weyaxi/Einstein-v6-7B",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T07:20:02Z | ---
base_model: DreadPoor/AllOverButTheCrying-7B-slerp
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- jondurbin/bagel-dpo-7b-v0.5
- Weyaxi/Einstein-v6-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/DreadPoor/AllOverButTheCrying-7B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
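Once downloaded, a single-file quant can be run directly with llama.cpp's CLI (illustrative; the file name matches the Q4_K_M quant listed below):

```bash
./main -m AllOverButTheCrying-7B-slerp.Q4_K_M.gguf -p "Hello," -n 128
```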
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AllOverButTheCrying-7B-slerp-GGUF/resolve/main/AllOverButTheCrying-7B-slerp.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
serjtroshin/finetuned_gpt2_nontoxic | serjtroshin | 2024-05-14T08:58:48Z | 146 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T08:37:15Z | ---
license: apache-2.0
---
|
ID-Animator/ID-Animator | ID-Animator | 2024-05-14T08:58:06Z | 0 | 9 | null | [
"text-to-video",
"arxiv:2404.15275",
"license:apache-2.0",
"region:us"
] | text-to-video | 2024-05-08T08:36:49Z | ---
license: apache-2.0
pipeline_tag: text-to-video
---
# ID-Animator
This repository hosts the official checkpoint of [ID-Animator](https://id-animator.github.io/),
a zero-shot, identity-preserving human video generation framework that produces high-quality, identity-specific human videos from a single reference ID image.
**[ID-Animator: Zero-Shot Identity-Preserving Human Video Generation](https://id-animator.github.io/)**
</br>
[Xuanhua He](https://scholar.google.com/citations?user=-bDAN2YAAAAJ&hl=en&oi=ao),
[Quande Liu*](https://liuquande.github.io/),
[Shengju Qian](https://scholar.google.com/citations?user=QNnWmasAAAAJ&hl=zh-CN),
Xin Wang,
Tao Hu,
[Ke Cao](https://scholar.google.com/citations?user=3qMrWmgAAAAJ&hl=en&oi=ao),
Keyu Yan,
Jie Zhang*
(*Corresponding Author)
[](https://arxiv.org/abs/2404.15275)
[](https://id-animator.github.io/)
[](https://huggingface.co/spaces/ID-Animator/ID-Animator)
## Human Video Generation Demos
### Recontextualization
<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Reference Image</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
</tr>
<tr>
<td width=25% style="border: none"><img src="./__assets__/ref/lecun1.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/first_part/lecun/2.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/first_part/lecun/3.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/first_part/lecun/4.gif" style="width:100%"></td>
</tr>
</table>
<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Reference Image</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
</tr>
<tr>
<td width=25% style="border: none"><img src="./__assets__/ref/fbb.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/first_part/ann/1.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/first_part/ann/4.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/first_part/ann/6.gif" style="width:100%"></td>
</tr>
</table>
### Inference with Community Models
<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Reference Image</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
</tr>
<tr>
<td width=25% style="border: none"><img src="./__assets__/ref/hinton.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/second/hinton/2.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/second/hinton/3.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/second/hinton/6.gif" style="width:100%"></td>
</tr>
</table>
<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Reference Image</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
</tr>
<tr>
<td width=25% style="border: none"><img src="./__assets__/ref/taylor.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/second/taylor/4.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/second/taylor/5.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/second/taylor/6.gif" style="width:100%"></td>
</tr>
</table>
### Identity Mixing
<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Reference Image 1</td>
<td width=25% style="border: none; text-align: center">Reference Image 2</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
</tr>
<tr>
<td width=25% style="border: none"><img src="./__assets__/ref/cl.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/ref/sms.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/third/1/1.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/third/1/6.gif" style="width:100%"></td>
</tr>
</table>
<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Reference Image 1</td>
<td width=25% style="border: none; text-align: center">Reference Image 2</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
</tr>
<tr>
<td width=25% style="border: none"><img src="./__assets__/ref/sansa.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/ref/musk.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/third/2/2.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/third/2/6.gif" style="width:100%"></td>
</tr>
</table>
### Combination with ControlNet
<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Reference Image</td>
<td width=25% style="border: none; text-align: center">Sketch Image</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
</tr>
<tr>
<td width=25% style="border: none"><img src="./__assets__/ref/fbb.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/ref/sketch.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/fourth/1.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/fourth/2.gif" style="width:100%"></td>
</tr>
</table>
<table class="center">
<tr style="line-height: 0">
<td width=25% style="border: none; text-align: center">Reference Image</td>
<td width=25% style="border: none; text-align: center">Sketch Sequence</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
<td width=25% style="border: none; text-align: center">Output Video</td>
</tr>
<tr>
<td width=25% style="border: none"><img src="./__assets__/ref/fbb.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/ref/sketch_sequence.png" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/fourth/3.gif" style="width:100%"></td>
<td width=25% style="border: none"><img src="./__assets__/fourth/4.gif" style="width:100%"></td>
</tr>
</table>
## Contact Us
**Xuanhua He**: [email protected]
**Quande Liu**: [email protected]
**Shengju Qian**: [email protected]
|
1024m/EXALT-1A-LLAMA3-5B-16bit | 1024m | 2024-05-14T08:57:27Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T08:53:08Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** 1024m
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kyl23/hw3_RTE_lora_1e-3 | kyl23 | 2024-05-14T08:54:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T08:54:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cangurcuoglu/den2 | cangurcuoglu | 2024-05-14T08:48:21Z | 63 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T07:27:59Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: den2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# den2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.9271
- Validation Loss: 6.9290
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative reconstruction is sketched after this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1620, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
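The dumped schedule corresponds roughly to transformers' TF optimizer helper; this reconstruction is illustrative:

```python
from transformers import create_optimizer

# Reconstructs the AdamWeightDecay + warmup/linear-decay schedule dumped above.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=1620,
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```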
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.8643 | 6.9290 | 0 |
| 6.9267 | 6.9290 | 1 |
| 6.9271 | 6.9290 | 2 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
LnL-AI/Yi-1.5-34B-Chat-4bit-gptq | LnL-AI | 2024-05-14T08:47:43Z | 12 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-14T07:53:47Z | ---
license: unknown
---
### Quantization config
```json
{
"bits": 4,
"group_size": 128,
"damp_percent": 0.005,
"desc_act": false,
"static_groups": false,
"sym": false,
"true_sequential": true,
"model_name_or_path": "",
"model_file_base_name": "model",
"quant_method": "gptq",
"checkpoint_format": "gptq",
"meta": {
"quantizer": "autogptq:0.8.0.dev1"
}
}
```
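A sketch for loading this checkpoint (assuming transformers' GPTQ integration, which requires `optimum` and `auto-gptq`; untested):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LnL-AI/Yi-1.5-34B-Chat-4bit-gptq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the 4-bit weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
``` |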
mradermacher/llama-3-sauce-v2-8B-GGUF | mradermacher | 2024-05-14T08:45:52Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"experimental",
"en",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:flammenai/FlameMix-DPO-v1",
"base_model:nbeerbower/llama-3-sauce-v2-8B",
"base_model:quantized:nbeerbower/llama-3-sauce-v2-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T08:18:08Z | ---
base_model: nbeerbower/llama-3-sauce-v2-8B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- flammenai/FlameMix-DPO-v1
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- experimental
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nbeerbower/llama-3-sauce-v2-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-sauce-v2-8B-GGUF/resolve/main/llama-3-sauce-v2-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
quangtqv/bi_encoder_tool_learning_14_5_2024_v7 | quangtqv | 2024-05-14T08:39:05Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-14T08:38:53Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# quangtqv/bi_encoder_tool_learning_14_5_2024_v7
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('quangtqv/bi_encoder_tool_learning_14_5_2024_v7')
embeddings = model.encode(sentences)
print(embeddings)
```
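Since the model is meant for sentence similarity, the embeddings from the snippet above can be scored directly. A small follow-on sketch (not part of the original card; it reuses the `embeddings` variable defined above):

```python
# Follow-on sketch: compare the two example sentences with cosine similarity.
from sentence_transformers import util

score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))  # values near 1.0 indicate high similarity
```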
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=quangtqv/bi_encoder_tool_learning_14_5_2024_v7)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
kanaluvu/bigscience-prompted-finetuned | kanaluvu | 2024-05-14T08:36:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T08:35:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
neih4207/checkpoint | neih4207 | 2024-05-14T08:34:48Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:thenlper/gte-large",
"base_model:finetune:thenlper/gte-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T07:06:16Z | ---
license: mit
base_model: thenlper/gte-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: checkpoint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoint
This model is a fine-tuned version of [thenlper/gte-large](https://huggingface.co/thenlper/gte-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1728
- Accuracy: 0.9545
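One minimal way to try the checkpoint is through the generic `transformers` pipeline; this is a sketch rather than documented usage, and the label names depend on the uploaded config:

```python
# Sketch: run the fine-tuned gte-large classifier on a sample input.
from transformers import pipeline

classifier = pipeline("text-classification", model="neih4207/checkpoint")
print(classifier("Example input text to classify."))
```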
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4496 | 1.0 | 11 | 0.8490 | 0.8 |
| 0.6028 | 2.0 | 22 | 0.3205 | 0.8909 |
| 0.183 | 3.0 | 33 | 0.2986 | 0.9273 |
| 0.0749 | 4.0 | 44 | 0.2600 | 0.9455 |
| 0.039 | 5.0 | 55 | 0.1932 | 0.9636 |
| 0.0208 | 6.0 | 66 | 0.1570 | 0.9636 |
| 0.0147 | 7.0 | 77 | 0.2016 | 0.9545 |
| 0.0119 | 8.0 | 88 | 0.1818 | 0.9545 |
| 0.0059 | 9.0 | 99 | 0.1700 | 0.9545 |
| 0.0048 | 10.0 | 110 | 0.1728 | 0.9545 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
Lynxexe/RitoTranslator_V1 | Lynxexe | 2024-05-14T08:31:15Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-14T08:16:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Litzy619/G0513HMA24H | Litzy619 | 2024-05-14T08:29:00Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T07:00:13Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0513HMA24H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0513HMA24H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
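For reference, these settings map onto `transformers.TrainingArguments` roughly as in the sketch below (illustrative only, not the author's actual training script; the output directory is an assumption):

```python
# Illustrative mapping of the hyperparameters above onto TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="G0513HMA24H",        # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,  # 8 x 16 = 128 effective train batch size
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    fp16=True,                       # "Native AMP" mixed precision
)
```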
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1589 | 0.09 | 10 | 2.7634 |
| 2.422 | 0.18 | 20 | 1.8450 |
| 1.3644 | 0.27 | 30 | 0.8162 |
| 0.4569 | 0.36 | 40 | 0.1885 |
| 0.17 | 0.45 | 50 | 0.1622 |
| 0.1549 | 0.54 | 60 | 0.1520 |
| 0.1502 | 0.63 | 70 | 0.1508 |
| 0.1525 | 0.73 | 80 | 0.1485 |
| 0.1547 | 0.82 | 90 | 0.1488 |
| 0.1467 | 0.91 | 100 | 0.1482 |
| 0.1483 | 1.0 | 110 | 0.1482 |
| 0.1434 | 1.09 | 120 | 0.1475 |
| 0.1437 | 1.18 | 130 | 0.1492 |
| 0.1427 | 1.27 | 140 | 0.1385 |
| 0.1412 | 1.36 | 150 | 0.1381 |
| 0.1351 | 1.45 | 160 | 0.1341 |
| 0.1334 | 1.54 | 170 | 0.1312 |
| 0.1321 | 1.63 | 180 | 0.1271 |
| 0.1329 | 1.72 | 190 | 0.1333 |
| 0.1298 | 1.81 | 200 | 0.1244 |
| 0.1278 | 1.9 | 210 | 0.1251 |
| 0.1277 | 1.99 | 220 | 0.1219 |
| 0.1163 | 2.08 | 230 | 0.1188 |
| 0.1154 | 2.18 | 240 | 0.1200 |
| 0.1136 | 2.27 | 250 | 0.1185 |
| 0.117 | 2.36 | 260 | 0.1167 |
| 0.1147 | 2.45 | 270 | 0.1159 |
| 0.1084 | 2.54 | 280 | 0.1149 |
| 0.1077 | 2.63 | 290 | 0.1130 |
| 0.1098 | 2.72 | 300 | 0.1119 |
| 0.1125 | 2.81 | 310 | 0.1115 |
| 0.1124 | 2.9 | 320 | 0.1114 |
| 0.1119 | 2.99 | 330 | 0.1114 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
pdx97/rl_course_vizdoom_health_gathering_supreme | pdx97 | 2024-05-14T08:25:10Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-14T08:25:00Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.17 +/- 3.73
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r pdx97/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, since training resumes from the step count at which the previous run concluded.
|
mradermacher/CeramicMaiden-7B-Slerp-GGUF | mradermacher | 2024-05-14T08:25:00Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"DreadPoor/Unobtainium-7B-task_arithmetic",
"DreadPoor/GoldenMaiden-7B-model_stock",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T07:16:53Z | ---
base_model: DreadPoor/CeramicMaiden-7B-Slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- DreadPoor/Unobtainium-7B-task_arithmetic
- DreadPoor/GoldenMaiden-7B-model_stock
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/DreadPoor/CeramicMaiden-7B-Slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
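If you want to script the choice of quant, a small sketch (assuming `huggingface_hub` is installed; not part of the original card) is to enumerate the repo's files first and pick a size that fits your hardware:

```python
# Small sketch: list the GGUF files in this repo to choose a quant size.
from huggingface_hub import HfApi

files = HfApi().list_repo_files("mradermacher/CeramicMaiden-7B-Slerp-GGUF")
print(sorted(f for f in files if f.endswith(".gguf")))
```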
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CeramicMaiden-7B-Slerp-GGUF/resolve/main/CeramicMaiden-7B-Slerp.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
astroficboy/testing | astroficboy | 2024-05-14T08:23:31Z | 0 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"dataset:PleIAs/YouTube-Commons",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T08:18:13Z | ---
license: apache-2.0
datasets:
- PleIAs/YouTube-Commons
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
--- |
krupakar-reddy/DSA_base_model | krupakar-reddy | 2024-05-14T08:19:09Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T07:04:38Z | ---
license: apache-2.0
---
|
abc88767/4sc51 | abc88767 | 2024-05-14T08:18:48Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T08:17:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AhmetAytar/all-mpnet-base-v2-fine-tuned_17_textbook_grobid | AhmetAytar | 2024-05-14T08:18:09Z | 11 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-14T08:14:04Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# AhmetAytar/all-mpnet-base-v2-fine-tuned_17_textbook_grobid
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AhmetAytar/all-mpnet-base-v2-fine-tuned_17_textbook_grobid')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AhmetAytar/all-mpnet-base-v2-fine-tuned_17_textbook_grobid)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 446 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 50,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 89,
"weight_decay": 0.01
}
```
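Taken together, a comparable setup could look like the sketch below. It is illustrative only: the training pairs are placeholders, since the actual data is not documented in this card.

```python
# Illustrative sketch of a comparable fine-tuning setup; data is a placeholder.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
train_examples = [
    InputExample(texts=["a textbook question", "the passage that answers it"]),
]
# shuffle=False mirrors the SequentialSampler recorded above
train_dataloader = DataLoader(train_examples, shuffle=False, batch_size=10)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=89,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```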
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
quanthunter/Hermes-2-Pro-Llama-3-8B-Q6_K-GGUF | quanthunter | 2024-05-14T08:17:04Z | 2 | 0 | null | [
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:quantized:NousResearch/Meta-Llama-3-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T08:15:34Z | ---
language:
- en
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- llama-cpp
- gguf-my-repo
base_model: NousResearch/Meta-Llama-3-8B
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Llama-3-8B
results: []
---
# quanthunter/Hermes-2-Pro-Llama-3-8B-Q6_K-GGUF
This model was converted to GGUF format from [`NousResearch/Hermes-2-Pro-Llama-3-8B`](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo quanthunter/Hermes-2-Pro-Llama-3-8B-Q6_K-GGUF --model hermes-2-pro-llama-3-8b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo quanthunter/Hermes-2-Pro-Llama-3-8B-Q6_K-GGUF --model hermes-2-pro-llama-3-8b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m hermes-2-pro-llama-3-8b.Q6_K.gguf -n 128
```
|
Minaaaa/electra_small_qa | Minaaaa | 2024-05-14T08:13:58Z | 167 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-14T08:13:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Andrei481/llama-3-8b-unsloth-corpus-open-instruct-ro-16b | Andrei481 | 2024-05-14T08:13:32Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:Andrei481/llama3-8b-corpus-ro-8k-16b",
"base_model:finetune:Andrei481/llama3-8b-corpus-ro-8k-16b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T08:06:44Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: Andrei481/llama3-8b-corpus-ro-8k-16b
---
# Uploaded model
- **Developed by:** Andrei481
- **License:** apache-2.0
- **Finetuned from model :** Andrei481/llama3-8b-corpus-ro-8k-16b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Litzy619/Phi30513MA | Litzy619 | 2024-05-14T08:12:58Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-14T06:01:14Z | ---
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- generated_from_trainer
model-index:
- name: Phi30513MA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi30513MA
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.162 | 0.09 | 10 | 2.1516 |
| 1.0891 | 0.18 | 20 | 0.3958 |
| 0.2412 | 0.27 | 30 | 0.1475 |
| 0.1456 | 0.36 | 40 | 0.1307 |
| 0.127 | 0.45 | 50 | 0.1272 |
| 0.1169 | 0.54 | 60 | 0.0964 |
| 0.0967 | 0.63 | 70 | 0.0978 |
| 0.0887 | 0.73 | 80 | 0.0936 |
| 0.0807 | 0.82 | 90 | 0.0875 |
| 0.0837 | 0.91 | 100 | 0.0734 |
| 0.0758 | 1.0 | 110 | 0.0739 |
| 0.0614 | 1.09 | 120 | 0.0710 |
| 0.0552 | 1.18 | 130 | 0.0801 |
| 0.0579 | 1.27 | 140 | 0.0727 |
| 0.0561 | 1.36 | 150 | 0.0691 |
| 0.0616 | 1.45 | 160 | 0.0688 |
| 0.0566 | 1.54 | 170 | 0.0676 |
| 0.0519 | 1.63 | 180 | 0.0681 |
| 0.0514 | 1.72 | 190 | 0.0678 |
| 0.0602 | 1.81 | 200 | 0.0634 |
| 0.0466 | 1.9 | 210 | 0.0660 |
| 0.0481 | 1.99 | 220 | 0.0692 |
| 0.0325 | 2.08 | 230 | 0.0737 |
| 0.0358 | 2.18 | 240 | 0.0797 |
| 0.0265 | 2.27 | 250 | 0.0851 |
| 0.0299 | 2.36 | 260 | 0.0870 |
| 0.0337 | 2.45 | 270 | 0.0826 |
| 0.0292 | 2.54 | 280 | 0.0812 |
| 0.0303 | 2.63 | 290 | 0.0813 |
| 0.0356 | 2.72 | 300 | 0.0799 |
| 0.0358 | 2.81 | 310 | 0.0795 |
| 0.0387 | 2.9 | 320 | 0.0792 |
| 0.0313 | 2.99 | 330 | 0.0792 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
Vedx04/Meta-Llama-3-8B-Instruct_explanation | Vedx04 | 2024-05-14T08:11:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T08:11:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Litzy619/G0513HMA23H | Litzy619 | 2024-05-14T08:11:52Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T06:56:21Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0513HMA23H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0513HMA23H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1683 | 0.09 | 10 | 2.8078 |
| 2.4559 | 0.18 | 20 | 1.8951 |
| 1.4297 | 0.27 | 30 | 0.8930 |
| 0.5326 | 0.36 | 40 | 0.2172 |
| 0.183 | 0.45 | 50 | 0.1592 |
| 0.1534 | 0.54 | 60 | 0.1511 |
| 0.1503 | 0.63 | 70 | 0.1499 |
| 0.152 | 0.73 | 80 | 0.1491 |
| 0.1456 | 0.82 | 90 | 0.1486 |
| 0.1454 | 0.91 | 100 | 0.1488 |
| 0.1488 | 1.0 | 110 | 0.1487 |
| 0.1438 | 1.09 | 120 | 0.1488 |
| 0.1452 | 1.18 | 130 | 0.1469 |
| 0.1458 | 1.27 | 140 | 0.1464 |
| 0.1469 | 1.36 | 150 | 0.1460 |
| 0.1417 | 1.45 | 160 | 0.1469 |
| 0.1427 | 1.54 | 170 | 0.1461 |
| 0.1442 | 1.63 | 180 | 0.1428 |
| 0.1446 | 1.72 | 190 | 0.1451 |
| 0.1416 | 1.81 | 200 | 0.1389 |
| 0.1378 | 1.9 | 210 | 0.1361 |
| 0.135 | 1.99 | 220 | 0.1304 |
| 0.129 | 2.08 | 230 | 0.1272 |
| 0.1251 | 2.18 | 240 | 0.1241 |
| 0.1213 | 2.27 | 250 | 0.1241 |
| 0.1287 | 2.36 | 260 | 0.1219 |
| 0.1251 | 2.45 | 270 | 0.1222 |
| 0.1206 | 2.54 | 280 | 0.1200 |
| 0.1168 | 2.63 | 290 | 0.1182 |
| 0.1183 | 2.72 | 300 | 0.1179 |
| 0.1199 | 2.81 | 310 | 0.1175 |
| 0.1216 | 2.9 | 320 | 0.1173 |
| 0.1206 | 2.99 | 330 | 0.1173 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
dongqingcc/llama3_smart_home | dongqingcc | 2024-05-14T08:05:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T07:48:49Z | ---
license: apache-2.0
---
|
ZaneHorrible/google-vit-base-patch16-384-batch_16_epoch_4_classes_24 | ZaneHorrible | 2024-05-14T08:02:07Z | 218 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-384",
"base_model:finetune:google/vit-base-patch16-384",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-14T05:11:55Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-384
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: google-vit-base-patch16-384-batch_16_epoch_4_classes_24
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: bengali_food_images
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9899425287356322
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-vit-base-patch16-384-batch_16_epoch_4_classes_24
This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on the bengali_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0635
- Accuracy: 0.9899
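A quick way to try the checkpoint is the `transformers` image-classification pipeline; the sketch below uses an illustrative image path, and the class labels come from the uploaded config:

```python
# Sketch: classify a food image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ZaneHorrible/google-vit-base-patch16-384-batch_16_epoch_4_classes_24",
)
print(classifier("bengali_food_photo.jpg"))  # illustrative local path or URL
```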
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2947 | 0.07 | 100 | 0.2491 | 0.9353 |
| 0.1703 | 0.14 | 200 | 0.2377 | 0.9339 |
| 0.0797 | 0.21 | 300 | 0.1413 | 0.9641 |
| 0.1035 | 0.28 | 400 | 0.1057 | 0.9641 |
| 0.0532 | 0.35 | 500 | 0.1711 | 0.9483 |
| 0.1004 | 0.42 | 600 | 0.1746 | 0.9526 |
| 0.0962 | 0.49 | 700 | 0.1598 | 0.9555 |
| 0.1579 | 0.56 | 800 | 0.1741 | 0.9440 |
| 0.0532 | 0.63 | 900 | 0.0974 | 0.9670 |
| 0.1594 | 0.7 | 1000 | 0.2842 | 0.9181 |
| 0.0488 | 0.77 | 1100 | 0.2928 | 0.9224 |
| 0.1122 | 0.84 | 1200 | 0.3095 | 0.9138 |
| 0.1252 | 0.91 | 1300 | 0.1411 | 0.9569 |
| 0.0517 | 0.97 | 1400 | 0.1378 | 0.9684 |
| 0.047 | 1.04 | 1500 | 0.2595 | 0.9483 |
| 0.0478 | 1.11 | 1600 | 0.1425 | 0.9583 |
| 0.0107 | 1.18 | 1700 | 0.1135 | 0.9684 |
| 0.0021 | 1.25 | 1800 | 0.1428 | 0.9598 |
| 0.036 | 1.32 | 1900 | 0.1851 | 0.9583 |
| 0.0733 | 1.39 | 2000 | 0.1801 | 0.9583 |
| 0.0549 | 1.46 | 2100 | 0.1917 | 0.9598 |
| 0.0442 | 1.53 | 2200 | 0.1538 | 0.9655 |
| 0.0196 | 1.6 | 2300 | 0.1411 | 0.9698 |
| 0.0809 | 1.67 | 2400 | 0.1862 | 0.9540 |
| 0.0004 | 1.74 | 2500 | 0.1325 | 0.9698 |
| 0.0404 | 1.81 | 2600 | 0.1246 | 0.9713 |
| 0.0691 | 1.88 | 2700 | 0.1961 | 0.9598 |
| 0.0088 | 1.95 | 2800 | 0.1841 | 0.9684 |
| 0.0029 | 2.02 | 2900 | 0.1057 | 0.9813 |
| 0.0005 | 2.09 | 3000 | 0.1131 | 0.9741 |
| 0.0001 | 2.16 | 3100 | 0.0892 | 0.9813 |
| 0.0002 | 2.23 | 3200 | 0.0757 | 0.9828 |
| 0.0186 | 2.3 | 3300 | 0.0794 | 0.9784 |
| 0.0127 | 2.37 | 3400 | 0.1100 | 0.9770 |
| 0.0048 | 2.44 | 3500 | 0.1386 | 0.9799 |
| 0.0048 | 2.51 | 3600 | 0.0635 | 0.9899 |
| 0.001 | 2.58 | 3700 | 0.0997 | 0.9799 |
| 0.0005 | 2.65 | 3800 | 0.1119 | 0.9756 |
| 0.0006 | 2.72 | 3900 | 0.1292 | 0.9713 |
| 0.0003 | 2.79 | 4000 | 0.1186 | 0.9770 |
| 0.0137 | 2.86 | 4100 | 0.0969 | 0.9770 |
| 0.0001 | 2.92 | 4200 | 0.0738 | 0.9842 |
| 0.0001 | 2.99 | 4300 | 0.1236 | 0.9828 |
| 0.0001 | 3.06 | 4400 | 0.0932 | 0.9856 |
| 0.0001 | 3.13 | 4500 | 0.0992 | 0.9799 |
| 0.0001 | 3.2 | 4600 | 0.0960 | 0.9828 |
| 0.0001 | 3.27 | 4700 | 0.1123 | 0.9799 |
| 0.0001 | 3.34 | 4800 | 0.1107 | 0.9813 |
| 0.0029 | 3.41 | 4900 | 0.1041 | 0.9842 |
| 0.0001 | 3.48 | 5000 | 0.1074 | 0.9828 |
| 0.0001 | 3.55 | 5100 | 0.1111 | 0.9799 |
| 0.0001 | 3.62 | 5200 | 0.1088 | 0.9784 |
| 0.0001 | 3.69 | 5300 | 0.0936 | 0.9813 |
| 0.0001 | 3.76 | 5400 | 0.0915 | 0.9799 |
| 0.0001 | 3.83 | 5500 | 0.0897 | 0.9799 |
| 0.0001 | 3.9 | 5600 | 0.0875 | 0.9770 |
| 0.0 | 3.97 | 5700 | 0.0856 | 0.9784 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
DUAL-GPO/zephyr-7b-gpo-v4-i3 | DUAL-GPO | 2024-05-14T07:54:49Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO-2/zephyr-7b-irepo-new-i2",
"base_model:adapter:DUAL-GPO-2/zephyr-7b-irepo-new-i2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T04:51:20Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: DUAL-GPO-2/zephyr-7b-irepo-new-i2
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-gpo-v4-i3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-gpo-v4-i3
This model is a fine-tuned version of [DUAL-GPO-2/zephyr-7b-irepo-new-i2](https://huggingface.co/DUAL-GPO-2/zephyr-7b-irepo-new-i2) on the HuggingFaceH4/ultrafeedback_binarized dataset.
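Because this repository holds PEFT adapter weights rather than full model weights, a minimal loading sketch (assuming `peft`, `transformers`, and a GPU with enough memory) attaches the adapter to the base model:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "DUAL-GPO-2/zephyr-7b-irepo-new-i2"
adapter_id = "DUAL-GPO/zephyr-7b-gpo-v4-i3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # apply the DPO-trained adapter
```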
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
khoantap/rabbit-fish-8b | khoantap | 2024-05-14T07:52:59Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T07:41:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TaroVN/NeoX-cost-0512-v5 | TaroVN | 2024-05-14T07:51:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T07:51:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Litzy619/G0513HMA14H | Litzy619 | 2024-05-14T07:50:39Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T06:35:16Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0513HMA14H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0513HMA14H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1825 | 0.09 | 10 | 2.8689 |
| 2.5641 | 0.18 | 20 | 2.0695 |
| 1.6393 | 0.27 | 30 | 1.1468 |
| 0.8037 | 0.36 | 40 | 0.3841 |
| 0.2412 | 0.45 | 50 | 0.2008 |
| 0.1664 | 0.54 | 60 | 0.1550 |
| 0.1533 | 0.63 | 70 | 0.1518 |
| 0.1517 | 0.73 | 80 | 0.1515 |
| 0.1433 | 0.82 | 90 | 0.1521 |
| 0.1475 | 0.91 | 100 | 0.1492 |
| 0.1493 | 1.0 | 110 | 0.1503 |
| 0.1457 | 1.09 | 120 | 0.1492 |
| 0.1462 | 1.18 | 130 | 0.1483 |
| 0.1464 | 1.27 | 140 | 0.1473 |
| 0.1488 | 1.36 | 150 | 0.1480 |
| 0.1424 | 1.45 | 160 | 0.1494 |
| 0.1444 | 1.54 | 170 | 0.1461 |
| 0.1461 | 1.63 | 180 | 0.1459 |
| 0.1463 | 1.72 | 190 | 0.1475 |
| 0.144 | 1.81 | 200 | 0.1454 |
| 0.1445 | 1.9 | 210 | 0.1436 |
| 0.1418 | 1.99 | 220 | 0.1384 |
| 0.1376 | 2.08 | 230 | 0.1386 |
| 0.1331 | 2.18 | 240 | 0.1328 |
| 0.1313 | 2.27 | 250 | 0.1339 |
| 0.132 | 2.36 | 260 | 0.1329 |
| 0.1302 | 2.45 | 270 | 0.1329 |
| 0.1268 | 2.54 | 280 | 0.1294 |
| 0.1242 | 2.63 | 290 | 0.1281 |
| 0.1238 | 2.72 | 300 | 0.1270 |
| 0.1249 | 2.81 | 310 | 0.1267 |
| 0.1243 | 2.9 | 320 | 0.1267 |
| 0.1254 | 2.99 | 330 | 0.1267 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
LnL-AI/Yi-1.5-34B-4bit-gptq | LnL-AI | 2024-05-14T07:49:35Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-13T16:24:28Z | ---
license: unknown
---
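A minimal loading sketch for this 4-bit GPTQ checkpoint, assuming `auto-gptq` and `optimum` are installed so that `transformers` can pick up the quantization config below automatically:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LnL-AI/Yi-1.5-34B-4bit-gptq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ config stored in the repo is detected and applied on load.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```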
### Quantization config:
```json
{
"bits": 4,
"group_size": 128,
"damp_percent": 0.005,
"desc_act": false,
"static_groups": false,
"sym": false,
"true_sequential": true,
"model_name_or_path": "",
"model_file_base_name": "model",
"quant_method": "gptq",
"checkpoint_format": "gptq",
"meta": {
"quantizer": "autogptq:0.8.0.dev1"
}
}
``` |
LnL-AI/Yi-1.5-9B-Chat-4bit-gptq-autoround | LnL-AI | 2024-05-14T07:48:43Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-14T07:18:53Z | ---
license: unknown
---
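Since this is a chat checkpoint, a minimal generation sketch would go through the tokenizer's chat template (assuming `auto-gptq` and `optimum` are installed; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LnL-AI/Yi-1.5-9B-Chat-4bit-gptq-autoround"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me three tips for better sleep."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```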
### Quantization config:
```json
{
"bits": 4,
"group_size": 128,
"damp_percent": 0.01,
"desc_act": false,
"static_groups": false,
"sym": true,
"true_sequential": false,
"model_name_or_path": null,
"model_file_base_name": "model",
"quant_method": "gptq",
"checkpoint_format": "gptq",
"meta": {
"quantizer": "intel/auto-round:0.2.0.dev",
"packer": "autogptq:0.8.0.dev1",
"iters": 1000,
"lr": 0.001,
"minmax_lr": 0.001,
"enable_minmax_tuning": false,
"enable_quanted_input": true,
"scale_dtype": "float16"
}
}
``` |
yifanxie/angry-pelican-1 | yifanxie | 2024-05-14T07:44:11Z | 142 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-14T07:42:16Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.40.1
```
Also make sure to provide your Hugging Face token to the pipeline if the model is in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="yifanxie/angry-pelican-1",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<eos><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"yifanxie/angry-pelican-1",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"yifanxie/angry-pelican-1",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself, taking the preprocessing steps into account:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yifanxie/angry-pelican-1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<eos><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 256
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
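A minimal sketch of the quantized, sharded loading path described above (requires `bitsandbytes` and at least one CUDA GPU):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "yifanxie/angry-pelican-1",
    load_in_4bit=True,   # or load_in_8bit=True
    device_map="auto",   # shard across all visible GPUs
    trust_remote_code=True,
)
```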
## Model Architecture
```
GemmaForCausalLM(
(model): GemmaModel(
(embed_tokens): Embedding(256000, 2048, padding_idx=0)
(layers): ModuleList(
(0-17): 18 x GemmaDecoderLayer(
(self_attn): GemmaSdpaAttention(
(q_proj): Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): Linear(in_features=2048, out_features=256, bias=False)
(v_proj): Linear(in_features=2048, out_features=256, bias=False)
(o_proj): Linear(in_features=2048, out_features=2048, bias=False)
(rotary_emb): GemmaRotaryEmbedding()
)
(mlp): GemmaMLP(
(gate_proj): Linear(in_features=2048, out_features=16384, bias=False)
(up_proj): Linear(in_features=2048, out_features=16384, bias=False)
(down_proj): Linear(in_features=16384, out_features=2048, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): GemmaRMSNorm()
(post_attention_layernorm): GemmaRMSNorm()
)
)
(norm): GemmaRMSNorm()
)
(lm_head): Linear(in_features=2048, out_features=256000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
arthurdeblaere/distilgpt2-finetuned-prompts | arthurdeblaere | 2024-05-14T07:42:58Z | 144 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T07:42:46Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-prompts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-prompts
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4970
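A minimal inference sketch with the `text-generation` pipeline (not part of the original card; the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="arthurdeblaere/distilgpt2-finetuned-prompts",
)
result = generator("A fantasy landscape with", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```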
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | 3.7537 |
| No log | 2.0 | 250 | 3.5373 |
| No log | 3.0 | 375 | 3.4970 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
yifanxie/angry-pelican | yifanxie | 2024-05-14T07:41:45Z | 142 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-14T07:39:51Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.40.1
```
Also make sure to provide your Hugging Face token to the pipeline if the model is in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="yifanxie/angry-pelican",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<eos><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"yifanxie/angry-pelican",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"yifanxie/angry-pelican",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself, taking the preprocessing steps into account:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yifanxie/angry-pelican" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<eos><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 256
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
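The same idea with an explicit `BitsAndBytesConfig`, the more recent `transformers` API for these flags (a sketch; requires `bitsandbytes`):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "yifanxie/angry-pelican",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    trust_remote_code=True,
)
```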
## Model Architecture
```
GemmaForCausalLM(
(model): GemmaModel(
(embed_tokens): Embedding(256000, 2048, padding_idx=0)
(layers): ModuleList(
(0-17): 18 x GemmaDecoderLayer(
(self_attn): GemmaSdpaAttention(
(q_proj): Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): Linear(in_features=2048, out_features=256, bias=False)
(v_proj): Linear(in_features=2048, out_features=256, bias=False)
(o_proj): Linear(in_features=2048, out_features=2048, bias=False)
(rotary_emb): GemmaRotaryEmbedding()
)
(mlp): GemmaMLP(
(gate_proj): Linear(in_features=2048, out_features=16384, bias=False)
(up_proj): Linear(in_features=2048, out_features=16384, bias=False)
(down_proj): Linear(in_features=16384, out_features=2048, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): GemmaRMSNorm()
(post_attention_layernorm): GemmaRMSNorm()
)
)
(norm): GemmaRMSNorm()
)
(lm_head): Linear(in_features=2048, out_features=256000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
Dharma20/code-search-net-tokenizer | Dharma20 | 2024-05-14T07:38:07Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T07:38:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
auravstomar7/bert-base-uncased-pronoun-coreference-ner | auravstomar7 | 2024-05-14T07:37:31Z | 65 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-14T07:25:57Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: auravaces/bert-base-uncased-pronoun-coreference
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# auravaces/bert-base-uncased-pronoun-coreference
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0479
- Validation Loss: 0.0955
- Epoch: 2
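A minimal inference sketch with the `token-classification` pipeline (not part of the original card; the sentence is illustrative):

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="auravstomar7/bert-base-uncased-pronoun-coreference-ner",
    aggregation_strategy="simple",  # merge subword tokens into spans
)
print(tagger("Mary told John that she would send him the report."))
```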
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 375, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0896 | 0.0910 | 0 |
| 0.0634 | 0.0898 | 1 |
| 0.0479 | 0.0955 | 2 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
taimoor-ahmed1/finetuning-sentiment-model-3000-samples | taimoor-ahmed1 | 2024-05-14T07:33:03Z | 121 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-13T20:35:37Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9888
- Accuracy: 0.7872
- F1: 0.7872
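A minimal inference sketch with the `sentiment-analysis` pipeline (not part of the original card; the review is illustrative):

```python
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="taimoor-ahmed1/finetuning-sentiment-model-3000-samples",
)
print(sentiment("This movie was far better than I expected."))
```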
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mizworski/text_classifier | mizworski | 2024-05-14T07:29:59Z | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-12T15:46:55Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: mizworski/text_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mizworski/text_classifier
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3163
- Validation Loss: 1.2206
- Train Accuracy: 0.4496
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.3163 | 1.2206 | 0.4496 | 0 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
crrodrvi/t5-neutralization | crrodrvi | 2024-05-14T07:27:38Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-14T07:16:53Z | ---
license: apache-2.0
base_model: t5-base
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-neutralization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-neutralization
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8334
- Bleu: 1.8666
- Gen Len: 19.0
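A minimal inference sketch with the `text2text-generation` pipeline; the input string is a placeholder for a sentence to neutralize:

```python
from transformers import pipeline

neutralizer = pipeline(
    "text2text-generation",
    model="crrodrvi/t5-neutralization",
)
text = "..."  # replace with a sentence to neutralize
print(neutralizer(text, max_length=64)[0]["generated_text"])
```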
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 16 | 1.8974 | 1.8455 | 19.0 |
| No log | 2.0 | 32 | 1.8334 | 1.8666 | 19.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/Synaptica-GGUF | mradermacher | 2024-05-14T07:24:34Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/Synaptica",
"base_model:quantized:mergekit-community/Synaptica",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T06:46:46Z | ---
base_model: mergekit-community/Synaptica
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mergekit-community/Synaptica
<!-- provided-files -->
weighted/imatrix quants are not currently available from me. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
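For example, a downloaded quant can be run locally with `llama-cpp-python` (a sketch; the file name matches the Q4_K_M entry in the table below):

```python
from llama_cpp import Llama

llm = Llama(model_path="Synaptica.Q4_K_M.gguf", n_ctx=4096)
out = llm("Why do GGUF quants differ in quality?", max_tokens=128)
print(out["choices"][0]["text"])
```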
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.IQ3_XS.gguf) | IQ3_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.IQ3_M.gguf) | IQ3_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CVR123/Tamil-BERT-finetune-Tamil-questions | CVR123 | 2024-05-14T07:21:41Z | 112 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:l3cube-pune/tamil-bert",
"base_model:finetune:l3cube-pune/tamil-bert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T06:55:53Z | ---
license: cc-by-4.0
base_model: l3cube-pune/tamil-bert
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: Tamil-BERT-finetune-Tamil-questions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tamil-BERT-finetune-Tamil-questions
This model is a fine-tuned version of [l3cube-pune/tamil-bert](https://huggingface.co/l3cube-pune/tamil-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3564
- Precision: 0.9226
- Recall: 0.9218
- Accuracy: 0.9218
- F1-score: 0.9220
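A minimal inference sketch with the `text-classification` pipeline; the input is a placeholder for a Tamil question:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="CVR123/Tamil-BERT-finetune-Tamil-questions",
)
question = "..."  # replace with a Tamil question
print(classifier(question))
```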
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 1.534 | 1.0 | 305 | 1.2125 | 0.8686 | 0.8778 | 0.8778 | 0.8701 |
| 0.937 | 2.0 | 610 | 0.7374 | 0.8869 | 0.8958 | 0.8958 | 0.8899 |
| 0.5335 | 3.0 | 915 | 0.4742 | 0.8959 | 0.9078 | 0.9078 | 0.9007 |
| 0.3097 | 4.0 | 1220 | 0.3972 | 0.9004 | 0.9138 | 0.9138 | 0.9064 |
| 0.2083 | 5.0 | 1525 | 0.3869 | 0.9103 | 0.9058 | 0.9058 | 0.9018 |
| 0.1535 | 6.0 | 1830 | 0.4181 | 0.9115 | 0.9078 | 0.9078 | 0.9087 |
| 0.1222 | 7.0 | 2135 | 0.3576 | 0.9243 | 0.9238 | 0.9238 | 0.9240 |
| 0.1002 | 8.0 | 2440 | 0.3564 | 0.9226 | 0.9218 | 0.9218 | 0.9220 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Tyhcbs/taxi_try | Tyhcbs | 2024-05-14T07:20:21Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-14T07:20:19Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_try
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # needed for gym.make below; the load_from_hub helper is defined in the Deep RL Course notebook

model = load_from_hub(repo_id="Tyhcbs/taxi_try", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
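Continuing the snippet above, here is a short greedy-evaluation sketch. It assumes the pickled dict follows the Deep RL Course format, with the Q-table stored under a `qtable` key (an assumption, since the card does not document the keys), and uses the classic `gym` step API.

```python
import numpy as np

# Greedy rollout with the loaded Q-table (gym < 0.26 API;
# on gymnasium, reset() returns (obs, info) and step() a 5-tuple).
state = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the highest-value action
    state, reward, done, _ = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```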
|
mradermacher/Yi-1.5-34B-GGUF | mradermacher | 2024-05-14T07:10:57Z | 55 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:01-ai/Yi-1.5-34B",
"base_model:quantized:01-ai/Yi-1.5-34B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T22:00:47Z | ---
base_model: 01-ai/Yi-1.5-34B
language:
- en
library_name: transformers
license: apache-2.0
no_imatrix: nan
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/01-ai/Yi-1.5-34B
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
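
For a concrete starting point, here is a minimal sketch using the third-party `llama-cpp-python` binding (a choice of convenience; any GGUF-compatible runtime such as llama.cpp works equally well). The file name matches the Q4_K_M quant from the table below, and the context size is an illustrative default.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a locally downloaded quant; settings here are illustrative, not prescriptive.
llm = Llama(model_path="Yi-1.5-34B.Q4_K_M.gguf", n_ctx=4096)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```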
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-GGUF/resolve/main/Yi-1.5-34B.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Tyhcbs/q-FrozenLake-v1-4x4-noSlippery | Tyhcbs | 2024-05-14T07:08:41Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-14T07:08:39Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # needed for gym.make below; the load_from_hub helper is defined in the Deep RL Course notebook

model = load_from_hub(repo_id="Tyhcbs/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MuntasirHossain/Llama-3-8B-OpenOrca-peft-adapter | MuntasirHossain | 2024-05-14T07:06:42Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] | null | 2024-05-14T07:03:12Z | ---
license: llama3
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
model-index:
- name: Llama-3-8B-OpenOrca-peft-adapter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3-8B-OpenOrca-peft-adapter
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3975
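
The card does not include a loading example; below is a minimal sketch using PEFT's `AutoPeftModelForCausalLM`, which downloads the base Llama-3 weights and applies this adapter on top (access to the gated base model is assumed).

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads meta-llama/Meta-Llama-3-8B and applies this LoRA adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    "MuntasirHossain/Llama-3-8B-OpenOrca-peft-adapter", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```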
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0865 | 1.0 | 1425 | 1.3975 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
justinsiow/schema_filter | justinsiow | 2024-05-14T07:00:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T05:40:17Z | ---
license: apache-2.0
---
|
xzybit/summarize_model | xzybit | 2024-05-14T06:54:01Z | 113 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-14T04:51:45Z | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarize_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarize_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5779
- Rouge1: 0.13
- Rouge2: 0.0417
- Rougel: 0.1089
- Rougelsum: 0.1088
- Gen Len: 19.0
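
Since the usage section below is empty, here is a minimal, hypothetical inference sketch with the 🤗 `pipeline` API; the input text and length limits are illustrative only.

```python
from transformers import pipeline

# Hypothetical usage sketch for this T5-small summarizer.
summarizer = pipeline("summarization", model="xzybit/summarize_model")

text = "The tower is 324 metres tall, about the same height as an 81-storey building, and the tallest structure in Paris."
print(summarizer(text, max_length=30, min_length=10)[0]["summary_text"])
```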
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 124 | 2.6652 | 0.1276 | 0.038 | 0.1055 | 0.1054 | 19.0 |
| No log | 2.0 | 248 | 2.5779 | 0.13 | 0.0417 | 0.1089 | 0.1088 | 19.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mulanai/mulan-lang-adapter | mulanai | 2024-05-14T06:50:15Z | 0 | 8 | diffusers | [
"diffusers",
"region:us"
] | null | 2024-05-11T02:59:00Z | ---
library_name: diffusers
---
# MuLan Language Adapter
What is it?
> We present MuLan, a versatile framework that natively equips any diffusion model with multilingual generation abilities in up to 110+ languages around the world. With a properly trained text encoder learned from noisy data, we demonstrate that MuLan can be trained on English-only data and still support other languages zero-shot. Additionally, we introduce the Language Adapter: with fewer than 20M parameters, trained against a frozen denoiser and text encoder, it can be readily combined with any homologous community models/tools, such as LoRA, LCM, ControlNet, and IP-Adapter, without any finetuning.
https://github.com/mulanai/MuLan
Examples:
```diff
# pip install mulankit
from diffusers import StableDiffusionPipeline
+ import mulankit
pipe = StableDiffusionPipeline.from_pretrained('Lykon/dreamshaper-8')
+ pipe = mulankit.transform(pipe, 'mulanai/mulan-lang-adapter::sd15_aesthetic.pth')
image = pipe('一只蓝色的🐶 in the 바다').images[0]
``` |
abc88767/5c50 | abc88767 | 2024-05-14T06:46:45Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T05:45:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ZaneHorrible/google-vit-base-patch16-224-in21k-batch_16_epoch_4_classes_24 | ZaneHorrible | 2024-05-14T06:45:30Z | 220 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-14T05:18:57Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: google-vit-base-patch16-224-in21k-batch_16_epoch_4_classes_24
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9683908045977011
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-vit-base-patch16-224-in21k-batch_16_epoch_4_classes_24
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1385
- Accuracy: 0.9684
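
The card leaves usage undocumented; a minimal, hypothetical inference sketch is below. The 24 class labels come from the undocumented `imagefolder` dataset, so treat the output labels as placeholders.

```python
from transformers import pipeline

# Hypothetical usage sketch for this fine-tuned ViT classifier.
classifier = pipeline(
    "image-classification",
    model="ZaneHorrible/google-vit-base-patch16-224-in21k-batch_16_epoch_4_classes_24",
)
print(classifier("example.jpg"))  # hypothetical path or URL to an input image
```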
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7768 | 0.07 | 100 | 0.7113 | 0.9282 |
| 0.3925 | 0.14 | 200 | 0.4597 | 0.8908 |
| 0.2437 | 0.21 | 300 | 0.3130 | 0.9282 |
| 0.2105 | 0.28 | 400 | 0.3497 | 0.9023 |
| 0.1744 | 0.35 | 500 | 0.3150 | 0.9124 |
| 0.167 | 0.42 | 600 | 0.2949 | 0.9239 |
| 0.1176 | 0.49 | 700 | 0.3204 | 0.9195 |
| 0.077 | 0.56 | 800 | 0.3104 | 0.9253 |
| 0.1113 | 0.63 | 900 | 0.1950 | 0.9511 |
| 0.1172 | 0.7 | 1000 | 0.2692 | 0.9239 |
| 0.0971 | 0.77 | 1100 | 0.3097 | 0.9267 |
| 0.1143 | 0.84 | 1200 | 0.2072 | 0.9454 |
| 0.1545 | 0.91 | 1300 | 0.2859 | 0.9253 |
| 0.0794 | 0.97 | 1400 | 0.2893 | 0.9224 |
| 0.0951 | 1.04 | 1500 | 0.2094 | 0.9483 |
| 0.0657 | 1.11 | 1600 | 0.2714 | 0.9353 |
| 0.0068 | 1.18 | 1700 | 0.2305 | 0.9425 |
| 0.0511 | 1.25 | 1800 | 0.1682 | 0.9555 |
| 0.0629 | 1.32 | 1900 | 0.2328 | 0.9454 |
| 0.0373 | 1.39 | 2000 | 0.3263 | 0.9310 |
| 0.0885 | 1.46 | 2100 | 0.2341 | 0.9454 |
| 0.0433 | 1.53 | 2200 | 0.2670 | 0.9397 |
| 0.0046 | 1.6 | 2300 | 0.2308 | 0.9468 |
| 0.0054 | 1.67 | 2400 | 0.3182 | 0.9296 |
| 0.0952 | 1.74 | 2500 | 0.2297 | 0.9411 |
| 0.1361 | 1.81 | 2600 | 0.2058 | 0.9454 |
| 0.1124 | 1.88 | 2700 | 0.1656 | 0.9598 |
| 0.0339 | 1.95 | 2800 | 0.1933 | 0.9526 |
| 0.0021 | 2.02 | 2900 | 0.1475 | 0.9569 |
| 0.0248 | 2.09 | 3000 | 0.1806 | 0.9583 |
| 0.0013 | 2.16 | 3100 | 0.1899 | 0.9526 |
| 0.0035 | 2.23 | 3200 | 0.1391 | 0.9641 |
| 0.0358 | 2.3 | 3300 | 0.1593 | 0.9684 |
| 0.0026 | 2.37 | 3400 | 0.1927 | 0.9612 |
| 0.001 | 2.44 | 3500 | 0.1756 | 0.9583 |
| 0.0113 | 2.51 | 3600 | 0.1512 | 0.9713 |
| 0.0009 | 2.58 | 3700 | 0.1540 | 0.9698 |
| 0.0498 | 2.65 | 3800 | 0.1498 | 0.9641 |
| 0.0084 | 2.72 | 3900 | 0.1435 | 0.9655 |
| 0.001 | 2.79 | 4000 | 0.1199 | 0.9713 |
| 0.0011 | 2.86 | 4100 | 0.1301 | 0.9655 |
| 0.003 | 2.92 | 4200 | 0.1350 | 0.9727 |
| 0.0025 | 2.99 | 4300 | 0.1764 | 0.9583 |
| 0.0006 | 3.06 | 4400 | 0.1564 | 0.9713 |
| 0.0006 | 3.13 | 4500 | 0.1524 | 0.9713 |
| 0.0006 | 3.2 | 4600 | 0.1515 | 0.9727 |
| 0.0006 | 3.27 | 4700 | 0.1633 | 0.9741 |
| 0.0005 | 3.34 | 4800 | 0.1404 | 0.9713 |
| 0.0005 | 3.41 | 4900 | 0.1586 | 0.9684 |
| 0.0005 | 3.48 | 5000 | 0.1576 | 0.9655 |
| 0.0005 | 3.55 | 5100 | 0.1505 | 0.9684 |
| 0.0153 | 3.62 | 5200 | 0.1369 | 0.9684 |
| 0.0005 | 3.69 | 5300 | 0.1579 | 0.9670 |
| 0.0005 | 3.76 | 5400 | 0.1451 | 0.9698 |
| 0.0005 | 3.83 | 5500 | 0.1417 | 0.9698 |
| 0.0005 | 3.9 | 5600 | 0.1380 | 0.9698 |
| 0.0004 | 3.97 | 5700 | 0.1385 | 0.9684 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
briannlongzhao/textual_inversion | briannlongzhao | 2024-05-14T06:42:39Z | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-29T10:22:40Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - briannlongzhao/textual_inversion
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images below.
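A minimal loading sketch follows. It assumes the learned embedding is stored in this repository in the standard diffusers layout, and that `<concept>` stands in for the (undocumented) placeholder token chosen at training time.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
# Loads the learned embedding from this repository into the text encoder.
pipe.load_textual_inversion("briannlongzhao/textual_inversion")

image = pipe("a photo of <concept>").images[0]  # <concept>: hypothetical placeholder token
```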
|
second-state/Llama-3-8B-Japanese-Instruct-GGUF | second-state | 2024-05-14T06:42:38Z | 60 | 3 | null | [
"gguf",
"text-generation",
"en",
"ja",
"base_model:haqishen/Llama-3-8B-Japanese-Instruct",
"base_model:quantized:haqishen/Llama-3-8B-Japanese-Instruct",
"license:other",
"region:us",
"conversational"
] | text-generation | 2024-05-14T05:37:53Z | ---
license: other
license_name: llama3
base_model: haqishen/Llama-3-8B-Japanese-Instruct
inference: false
model_creator: haqishen
model_type: llama
pipeline_tag: text-generation
quantized_by: Second State Inc.
language:
- en
- ja
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama-3-8B-Japanese-Instruct-GGUF
## Original Model
[haqishen/Llama-3-8B-Japanese-Instruct](https://huggingface.co/haqishen/Llama-3-8B-Japanese-Instruct)
## Run with LlamaEdge
- LlamaEdge version: [v0.10.1](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.10.1) and above
- Prompt template
- Prompt type: `llama-3-chat`
- Prompt string
```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
- Context size: `4096`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template llama-3-chat \
--ctx-size 4096 \
  --model-name Llama-3-8B-Japanese-Instruct
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template llama-3-chat \
--ctx-size 4096
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Llama-3-8B-Japanese-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q2_K.gguf) | Q2_K | 2 | 3.18 GB| smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-8B-Japanese-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 4.32 GB| small, substantial quality loss |
| [Llama-3-8B-Japanese-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 4.02 GB| very small, high quality loss |
| [Llama-3-8B-Japanese-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 3.66 GB| very small, high quality loss |
| [Llama-3-8B-Japanese-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q4_0.gguf) | Q4_0 | 4 | 4.66 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-8B-Japanese-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 4.92 GB| medium, balanced quality - recommended |
| [Llama-3-8B-Japanese-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 4.69 GB| small, greater quality loss |
| [Llama-3-8B-Japanese-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q5_0.gguf) | Q5_0 | 5 | 5.6 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 5.73 GB| large, very low quality loss - recommended |
| [Llama-3-8B-Japanese-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 5.6 GB| large, low quality loss - recommended |
| [Llama-3-8B-Japanese-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q6_K.gguf) | Q6_K | 6 | 6.6 GB| very large, extremely low quality loss |
| [Llama-3-8B-Japanese-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-Q8_0.gguf) | Q8_0 | 8 | 8.54 GB| very large, extremely low quality loss - not recommended |
| [Llama-3-8B-Japanese-Instruct-f16.gguf](https://huggingface.co/second-state/Llama-3-8B-Japanese-Instruct-GGUF/blob/main/Llama-3-8B-Japanese-Instruct-f16.gguf) | f16 | 16 | 16.1 GB| |
*Quantized with llama.cpp b2824.*
|
LordY54/recophi3_4bit | LordY54 | 2024-05-14T06:42:37Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:quantized:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-14T06:41:08Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** LordY54
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
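
A minimal inference sketch with plain 🤗 Transformers is below; loading the saved bnb-4bit weights requires `bitsandbytes`, and using Unsloth's `FastLanguageModel.from_pretrained` is an equally valid alternative. The prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The checkpoint ships bnb-4bit weights, so bitsandbytes must be installed.
tokenizer = AutoTokenizer.from_pretrained("LordY54/recophi3_4bit")
model = AutoModelForCausalLM.from_pretrained("LordY54/recophi3_4bit", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```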
|
Pertical/ppo-Huggy | Pertical | 2024-05-14T06:41:54Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-05-14T06:41:49Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Pertical/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mradermacher/Mexa7b-GGUF | mradermacher | 2024-05-14T06:41:44Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T06:14:28Z | ---
base_model: SiguienteGlobal/Mexa7b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/SiguienteGlobal/Mexa7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mexa7b-GGUF/resolve/main/Mexa7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Minerva-MoE-3x3B-GGUF | mradermacher | 2024-05-14T06:40:33Z | 203 | 1 | transformers | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"sapienzanlp/Minerva-3B-base-v1.0",
"DeepMount00/Minerva-3B-base-RAG",
"FairMind/Minerva-3B-Instruct-v1.0",
"en",
"base_model:ludocomito/Minerva-MoE-3x3B",
"base_model:quantized:ludocomito/Minerva-MoE-3x3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T06:14:15Z | ---
base_model: ludocomito/Minerva-MoE-3x3B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- sapienzanlp/Minerva-3B-base-v1.0
- DeepMount00/Minerva-3B-base-RAG
- FairMind/Minerva-3B-Instruct-v1.0
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ludocomito/Minerva-MoE-3x3B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.IQ3_M.gguf) | IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.Q6_K.gguf) | Q6_K | 6.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-3x3B-GGUF/resolve/main/Minerva-MoE-3x3B.f16.gguf) | f16 | 14.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Praveenna/sd-class-butterflies-32 | Praveenna | 2024-05-14T06:37:45Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-05-14T06:37:31Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Praveenna/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
tsavage68/Transaminitis_L3_1000rate_1e8_SFT | tsavage68 | 2024-05-14T06:35:30Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T06:31:32Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Transaminitis_L3_1000rate_1e8_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Transaminitis_L3_1000rate_1e8_SFT
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6870
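
The card omits a usage snippet; here is a minimal chat-style generation sketch using the Llama-3 chat template. The example prompt and the assumption that this fine-tune expects plain chat-template input are hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tsavage68/Transaminitis_L3_1000rate_1e8_SFT"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "A patient has elevated ALT and AST. What workup do you suggest?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```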
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.684 | 0.2 | 25 | 2.6901 |
| 2.6773 | 0.4 | 50 | 2.6883 |
| 2.6627 | 0.6 | 75 | 2.6887 |
| 2.6575 | 0.8 | 100 | 2.6912 |
| 2.6624 | 1.0 | 125 | 2.6897 |
| 2.6725 | 1.2 | 150 | 2.6884 |
| 2.6661 | 1.4 | 175 | 2.6891 |
| 2.692 | 1.6 | 200 | 2.6879 |
| 2.6801 | 1.8 | 225 | 2.6855 |
| 2.6683 | 2.0 | 250 | 2.6867 |
| 2.6812 | 2.2 | 275 | 2.6857 |
| 2.6786 | 2.4 | 300 | 2.6862 |
| 2.6726 | 2.6 | 325 | 2.6863 |
| 2.6733 | 2.8 | 350 | 2.6870 |
| 2.664 | 3.0 | 375 | 2.6880 |
| 2.665 | 3.2 | 400 | 2.6871 |
| 2.671 | 3.4 | 425 | 2.6854 |
| 2.6788 | 3.6 | 450 | 2.6870 |
| 2.673 | 3.8 | 475 | 2.6880 |
| 2.648 | 4.0 | 500 | 2.6863 |
| 2.6661 | 4.2 | 525 | 2.6866 |
| 2.6707 | 4.4 | 550 | 2.6856 |
| 2.6799 | 4.6 | 575 | 2.6870 |
| 2.673 | 4.8 | 600 | 2.6874 |
| 2.6757 | 5.0 | 625 | 2.6856 |
| 2.6658 | 5.2 | 650 | 2.6874 |
| 2.6712 | 5.4 | 675 | 2.6869 |
| 2.674 | 5.6 | 700 | 2.6866 |
| 2.6804 | 5.8 | 725 | 2.6866 |
| 2.6755 | 6.0 | 750 | 2.6872 |
| 2.685 | 6.2 | 775 | 2.6870 |
| 2.6701 | 6.4 | 800 | 2.6870 |
| 2.6893 | 6.6 | 825 | 2.6870 |
| 2.6722 | 6.8 | 850 | 2.6870 |
| 2.6783 | 7.0 | 875 | 2.6870 |
| 2.6671 | 7.2 | 900 | 2.6870 |
| 2.6691 | 7.4 | 925 | 2.6870 |
| 2.6947 | 7.6 | 950 | 2.6870 |
| 2.6773 | 7.8 | 975 | 2.6870 |
| 2.6737 | 8.0 | 1000 | 2.6870 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
CVR123/Muril-base-finetune-Tamil-questions | CVR123 | 2024-05-14T06:33:53Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google/muril-base-cased",
"base_model:finetune:google/muril-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T06:33:23Z | ---
license: apache-2.0
base_model: google/muril-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: Muril-base-finetune-Tamil-questions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Muril-base-finetune-Tamil-questions
This model is a fine-tuned version of [google/muril-base-cased](https://huggingface.co/google/muril-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4081
- Precision: 0.9205
- Recall: 0.9198
- Accuracy: 0.9198
- F1-score: 0.9199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 1.5732 | 1.0 | 305 | 1.2601 | 0.8743 | 0.8858 | 0.8858 | 0.8790 |
| 0.9937 | 2.0 | 610 | 0.7465 | 0.8988 | 0.9098 | 0.9098 | 0.9033 |
| 0.5354 | 3.0 | 915 | 0.4557 | 0.9044 | 0.9158 | 0.9158 | 0.9092 |
| 0.2862 | 4.0 | 1220 | 0.3772 | 0.9198 | 0.9198 | 0.9198 | 0.9193 |
| 0.1724 | 5.0 | 1525 | 0.3306 | 0.9274 | 0.9259 | 0.9259 | 0.9261 |
| 0.1235 | 6.0 | 1830 | 0.3763 | 0.9214 | 0.9158 | 0.9158 | 0.9171 |
| 0.0902 | 7.0 | 2135 | 0.3808 | 0.9229 | 0.9218 | 0.9218 | 0.9219 |
| 0.0644 | 8.0 | 2440 | 0.3974 | 0.9229 | 0.9218 | 0.9218 | 0.9220 |
| 0.0575 | 9.0 | 2745 | 0.3930 | 0.9224 | 0.9218 | 0.9218 | 0.9218 |
| 0.0483 | 10.0 | 3050 | 0.4081 | 0.9205 | 0.9198 | 0.9198 | 0.9199 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
VRTX/bert_profanity | VRTX | 2024-05-14T06:33:29Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T06:33:29Z | ---
license: apache-2.0
---
|
abc88767/3sc49 | abc88767 | 2024-05-14T06:28:21Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T05:37:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tsavage68/Transaminitis_L3_1000rate_1e6_SFT2 | tsavage68 | 2024-05-14T06:24:48Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T00:49:22Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Transaminitis_L3_1000rate_1e5_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Transaminitis_L3_1000rate_1e5_SFT
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6005 | 0.2 | 25 | 2.3625 |
| 1.2395 | 0.4 | 50 | 0.8953 |
| 0.4393 | 0.6 | 75 | 0.4060 |
| 0.3066 | 0.8 | 100 | 0.3098 |
| 0.3 | 1.0 | 125 | 0.3041 |
| 0.2988 | 1.2 | 150 | 0.2955 |
| 0.2894 | 1.4 | 175 | 0.2894 |
| 0.2818 | 1.6 | 200 | 0.2810 |
| 0.278 | 1.8 | 225 | 0.2814 |
| 0.2716 | 2.0 | 250 | 0.2779 |
| 0.2648 | 2.2 | 275 | 0.2768 |
| 0.2628 | 2.4 | 300 | 0.2783 |
| 0.2624 | 2.6 | 325 | 0.2815 |
| 0.2635 | 2.8 | 350 | 0.2761 |
| 0.2556 | 3.0 | 375 | 0.2768 |
| 0.2408 | 3.2 | 400 | 0.2981 |
| 0.2309 | 3.4 | 425 | 0.2811 |
| 0.2461 | 3.6 | 450 | 0.2850 |
| 0.2332 | 3.8 | 475 | 0.2830 |
| 0.2428 | 4.0 | 500 | 0.2811 |
| 0.1987 | 4.2 | 525 | 0.3089 |
| 0.2113 | 4.4 | 550 | 0.3099 |
| 0.2108 | 4.6 | 575 | 0.3069 |
| 0.2068 | 4.8 | 600 | 0.3066 |
| 0.1927 | 5.0 | 625 | 0.3122 |
| 0.1758 | 5.2 | 650 | 0.3315 |
| 0.1749 | 5.4 | 675 | 0.3320 |
| 0.1751 | 5.6 | 700 | 0.3326 |
| 0.1744 | 5.8 | 725 | 0.3294 |
| 0.1698 | 6.0 | 750 | 0.3292 |
| 0.1621 | 6.2 | 775 | 0.3365 |
| 0.1532 | 6.4 | 800 | 0.3391 |
| 0.1638 | 6.6 | 825 | 0.3403 |
| 0.1587 | 6.8 | 850 | 0.3405 |
| 0.1641 | 7.0 | 875 | 0.3407 |
| 0.1659 | 7.2 | 900 | 0.3403 |
| 0.1567 | 7.4 | 925 | 0.3407 |
| 0.1626 | 7.6 | 950 | 0.3409 |
| 0.1544 | 7.8 | 975 | 0.3408 |
| 0.1611 | 8.0 | 1000 | 0.3409 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/Minerva-MoE-2x3B-GGUF | mradermacher | 2024-05-14T06:13:16Z | 84 | 1 | transformers | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"DeepMount00/Minerva-3B-base-RAG",
"FairMind/Minerva-3B-Instruct-v1.0",
"en",
"base_model:ludocomito/Minerva-MoE-2x3B",
"base_model:quantized:ludocomito/Minerva-MoE-2x3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T05:54:09Z | ---
base_model: ludocomito/Minerva-MoE-2x3B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- DeepMount00/Minerva-3B-base-RAG
- FairMind/Minerva-3B-Instruct-v1.0
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ludocomito/Minerva-MoE-2x3B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.Q2_K.gguf) | Q2_K | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.IQ3_XS.gguf) | IQ3_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.Q3_K_S.gguf) | Q3_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.IQ3_S.gguf) | IQ3_S | 2.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.IQ3_M.gguf) | IQ3_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.Q3_K_M.gguf) | Q3_K_M | 2.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.Q3_K_L.gguf) | Q3_K_L | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.IQ4_XS.gguf) | IQ4_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.Q4_K_S.gguf) | Q4_K_S | 3.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.Q4_K_M.gguf) | Q4_K_M | 3.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.Q5_K_S.gguf) | Q5_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.Q5_K_M.gguf) | Q5_K_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.Q6_K.gguf) | Q6_K | 4.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.Q8_0.gguf) | Q8_0 | 5.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-MoE-2x3B-GGUF/resolve/main/Minerva-MoE-2x3B.f16.gguf) | f16 | 10.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
fine-tuned/cmedqav2-c | fine-tuned | 2024-05-14T06:09:15Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Healthcare",
"Medical",
"Treatment",
"Diagnosis",
"Advice",
"custom_code",
"zh",
"dataset:fine-tuned/cmedqav2-c",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-14T06:08:57Z | ---
license: apache-2.0
datasets:
- fine-tuned/cmedqav2-c
- allenai/c4
language:
- zh
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Healthcare
- Medical
- Treatment
- Diagnosis
- Advice
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-zh**](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh) designed for the following use case:
medical advice and treatment search engine
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# trust_remote_code is needed because the base Jina model
# ships custom modeling code
model = SentenceTransformer(
    'fine-tuned/cmedqav2-c',
    trust_remote_code=True
)
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])

# Cosine similarity of the two sentence embeddings
print(cos_sim(embeddings[0], embeddings[1]))
```
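Because the card targets a medical advice and treatment search engine, a small retrieval sketch may be more representative (it reuses `model` and `cos_sim` from the snippet above; the queries and documents are illustrative, not from the training data):
```python
corpus = [
    '高血压患者日常饮食需要注意什么?',  # dietary advice for hypertension
    '感冒发烧时应该如何退烧?',          # bringing down a fever
]
query = '血压高的人吃什么好?'           # what should people with high blood pressure eat?

doc_emb = model.encode(corpus)
query_emb = model.encode(query)

# Rank candidate documents by cosine similarity to the query
scores = cos_sim(query_emb, doc_emb)  # shape: (1, len(corpus))
best = int(scores.argmax())
print(corpus[best], float(scores[0][best]))
```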
|
theGhoul21/srl-base-irpo-gguf-q4_k_m-v0.2 | theGhoul21 | 2024-05-14T06:07:54Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:theGhoul21/srl-base-irpo-080524-16bit-v0.3-lighning-ai",
"base_model:quantized:theGhoul21/srl-base-irpo-080524-16bit-v0.3-lighning-ai",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T06:06:02Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: theGhoul21/srl-base-irpo-080524-16bit-v0.3-lighning-ai
---
# Uploaded model
- **Developed by:** theGhoul21
- **License:** apache-2.0
- **Finetuned from model:** theGhoul21/srl-base-irpo-080524-16bit-v0.3-lighning-ai
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
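The card ships no usage snippet; as a hedged sketch, `llama-cpp-python` can pull a GGUF straight from the Hub (the glob pattern assumes the repo contains a single q4_k_m GGUF file, which this card does not confirm):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# from_pretrained downloads the first repo file matching the pattern
llm = Llama.from_pretrained(
    repo_id="theGhoul21/srl-base-irpo-gguf-q4_k_m-v0.2",
    filename="*.gguf",  # assumption: one GGUF quant in the repo
    n_ctx=2048,
)
print(llm("Hello, how are you?", max_tokens=32)["choices"][0]["text"])
```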
|
sylviam00/output | sylviam00 | 2024-05-14T06:02:03Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-13T17:40:11Z | ---
license: openrail++
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
base_model: stabilityai/stable-diffusion-xl-base-1.0
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-sylviam00/output
These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
You can find some example images below.
prompt: baby with black hair

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
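The card leaves the snippet as a TODO; pending the author's version, here is a minimal sketch using the standard diffusers SDXL ControlNet API (the conditioning image, dtype, and device are assumptions — the card does not document what kind of conditioning these weights expect):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "sylviam00/output", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA GPU

# Placeholder conditioning image; replace with the kind this model was trained on
conditioning = load_image("conditioning.png")
image = pipe("baby with black hair", image=conditioning).images[0]
image.save("output.png")
```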
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
UnclearMind/dqn-AlienNoFrameskip-v4 | UnclearMind | 2024-05-14T05:59:12Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AlienNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-14T05:58:23Z | ---
library_name: stable-baselines3
tags:
- AlienNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AlienNoFrameskip-v4
type: AlienNoFrameskip-v4
metrics:
- type: mean_reward
value: 363.00 +/- 133.87
name: mean_reward
verified: false
---
# **DQN** Agent playing **AlienNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **AlienNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env AlienNoFrameskip-v4 -orga UnclearMind -f logs/
python -m rl_zoo3.enjoy --algo dqn --env AlienNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env AlienNoFrameskip-v4 -orga UnclearMind -f logs/
python -m rl_zoo3.enjoy --algo dqn --env AlienNoFrameskip-v4 -f logs/
```
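If you prefer loading the checkpoint directly in Python rather than through the zoo scripts, a sketch follows (the `.zip` filename is an assumption based on the usual RL Zoo naming convention, and actually running the policy additionally requires the Atari wrappers and 4-frame stacking listed under Hyperparameters):
```python
from huggingface_sb3 import load_from_hub  # pip install huggingface_sb3
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="UnclearMind/dqn-AlienNoFrameskip-v4",
    filename="dqn-AlienNoFrameskip-v4.zip",  # assumed RL Zoo naming
)
model = DQN.load(checkpoint)
print(model.policy)
```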
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env AlienNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env AlienNoFrameskip-v4 -f logs/ -orga UnclearMind
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 300000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Litzy619/O0503HMA10 | Litzy619 | 2024-05-14T05:58:14Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"base_model:finetune:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T04:53:55Z | ---
license: apache-2.0
base_model: allenai/OLMo-1B
tags:
- generated_from_trainer
model-index:
- name: O0503HMA10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0503HMA10
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
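For reference, these settings map roughly onto the following `TrainingArguments` (a sketch, not the author's actual training script):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="O0503HMA10",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,  # 8 x 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```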
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0442 | 0.09 | 10 | 0.2875 |
| 0.2001 | 0.18 | 20 | 0.1616 |
| 0.1529 | 0.27 | 30 | 0.1659 |
| 0.1573 | 0.36 | 40 | 0.1565 |
| 0.1506 | 0.45 | 50 | 0.1481 |
| 0.1513 | 0.54 | 60 | 0.1495 |
| 0.1483 | 0.63 | 70 | 0.1473 |
| 0.1475 | 0.73 | 80 | 0.1631 |
| 0.1494 | 0.82 | 90 | 0.1451 |
| 0.1495 | 0.91 | 100 | 0.1477 |
| 0.1522 | 1.0 | 110 | 0.1458 |
| 0.2446 | 1.09 | 120 | 0.1956 |
| 0.2972 | 1.18 | 130 | 0.4052 |
| 2.1659 | 1.27 | 140 | 7.2949 |
| 0.9982 | 1.36 | 150 | 0.1835 |
| 0.1634 | 1.45 | 160 | 0.1644 |
| 0.1558 | 1.54 | 170 | 0.1487 |
| 0.1513 | 1.63 | 180 | 0.1503 |
| 0.154 | 1.72 | 190 | 0.1514 |
| 1.0121 | 1.81 | 200 | 0.1626 |
| 0.1537 | 1.9 | 210 | 0.1536 |
| 0.1494 | 1.99 | 220 | 0.1531 |
| 0.15 | 2.08 | 230 | 0.1480 |
| 0.1448 | 2.18 | 240 | 0.1480 |
| 0.1454 | 2.27 | 250 | 0.1498 |
| 0.1462 | 2.36 | 260 | 0.1493 |
| 0.1449 | 2.45 | 270 | 0.1473 |
| 0.1431 | 2.54 | 280 | 0.1468 |
| 0.1441 | 2.63 | 290 | 0.1473 |
| 0.146 | 2.72 | 300 | 0.1464 |
| 0.145 | 2.81 | 310 | 0.1462 |
| 0.1458 | 2.9 | 320 | 0.1462 |
| 0.1468 | 2.99 | 330 | 0.1462 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
parthrautV/agri_llama3 | parthrautV | 2024-05-14T05:56:59Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-14T05:42:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
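The card leaves this section blank. Based only on the repo's tags (llama, text-generation, 4-bit, bitsandbytes), a generic loading sketch might look like this — treat every detail, including the prompt, as an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "parthrautV/agri_llama3"
tok = AutoTokenizer.from_pretrained(repo)
# device_map="auto" places the (4-bit) weights on available devices
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "What crops grow well in sandy soil?"  # illustrative; domain guessed from the name
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```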
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |