modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-08-03 00:49:08
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 549
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-08-03 00:44:12
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
tliu/asp-coref-flan-t5-large
|
tliu
| 2024-01-20T08:09:32Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"dataset:conll2012_ontonotesv5",
"arxiv:2210.14698",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T08:46:21Z |
---
license: mit
datasets:
- conll2012_ontonotesv5
language:
- en
metrics:
- f1
---
# Model Card for asp-coref-flan-t5-large

# Intro
This model is initialized from flan-t5-base and finetuned for coreference resolution task.
The model structure is described in the paper [Autoregressive Structured Prediction with Language Models](https://arxiv.org/pdf/2210.14698v2.pdf),
[Github repo](https://github.com/lyutyuh/ASP).
# Model Description
- **Task:** Coreference Resolution
- **Dataset:** CoNLL 2012 OntoNotes
- **Base Model:** flan-t5-large
# Command
```bash
CUDA_VISIBLE_DEVICES=0 python evaluate_coref.py flant5_large tliu/asp-coref-flan-t5-large 0
```
|
PetroGPT/Voldemort-10B-DPO
|
PetroGPT
| 2024-01-20T07:51:33Z | 1,305 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T05:49:53Z |
---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
afrideva/zephyr-220m-sft-full-GGUF
|
afrideva
| 2024-01-20T07:43:31Z | 23 | 0 | null |
[
"gguf",
"generated_from_trainer",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:BEE-spoke-data/zephyr-220m-sft-full",
"base_model:quantized:BEE-spoke-data/zephyr-220m-sft-full",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2024-01-20T07:42:47Z |
---
base_model: BEE-spoke-data/zephyr-220m-sft-full
datasets:
- HuggingFaceH4/ultrachat_200k
inference: false
license: apache-2.0
model-index:
- name: zephyr-220m-sft-full
results: []
model_creator: BEE-spoke-data
model_name: zephyr-220m-sft-full
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# BEE-spoke-data/zephyr-220m-sft-full-GGUF
Quantized GGUF model files for [zephyr-220m-sft-full](https://huggingface.co/BEE-spoke-data/zephyr-220m-sft-full) from [BEE-spoke-data](https://huggingface.co/BEE-spoke-data)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [zephyr-220m-sft-full.fp16.gguf](https://huggingface.co/afrideva/zephyr-220m-sft-full-GGUF/resolve/main/zephyr-220m-sft-full.fp16.gguf) | fp16 | 436.50 MB |
| [zephyr-220m-sft-full.q2_k.gguf](https://huggingface.co/afrideva/zephyr-220m-sft-full-GGUF/resolve/main/zephyr-220m-sft-full.q2_k.gguf) | q2_k | 94.43 MB |
| [zephyr-220m-sft-full.q3_k_m.gguf](https://huggingface.co/afrideva/zephyr-220m-sft-full-GGUF/resolve/main/zephyr-220m-sft-full.q3_k_m.gguf) | q3_k_m | 114.65 MB |
| [zephyr-220m-sft-full.q4_k_m.gguf](https://huggingface.co/afrideva/zephyr-220m-sft-full-GGUF/resolve/main/zephyr-220m-sft-full.q4_k_m.gguf) | q4_k_m | 137.58 MB |
| [zephyr-220m-sft-full.q5_k_m.gguf](https://huggingface.co/afrideva/zephyr-220m-sft-full-GGUF/resolve/main/zephyr-220m-sft-full.q5_k_m.gguf) | q5_k_m | 157.91 MB |
| [zephyr-220m-sft-full.q6_k.gguf](https://huggingface.co/afrideva/zephyr-220m-sft-full-GGUF/resolve/main/zephyr-220m-sft-full.q6_k.gguf) | q6_k | 179.52 MB |
| [zephyr-220m-sft-full.q8_0.gguf](https://huggingface.co/afrideva/zephyr-220m-sft-full-GGUF/resolve/main/zephyr-220m-sft-full.q8_0.gguf) | q8_0 | 232.28 MB |
## Original Model Card:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-220m-sft-full
This model is a fine-tuned version of [BEE-spoke-data/smol_llama-220M-openhermes](https://huggingface.co/BEE-spoke-data/smol_llama-220M-openhermes) on the Ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6447 | 1.0 | 1624 | 1.6579 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
https://wandb.ai/amazingvince/huggingface/runs/5rffzk3x/workspace?workspace=user-amazingvince
|
afrideva/TinyLlama-3T-1.1bee-GGUF
|
afrideva
| 2024-01-20T07:40:02Z | 41 | 1 | null |
[
"gguf",
"bees",
"bzz",
"honey",
"oprah winfrey",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"dataset:BEE-spoke-data/bees-internal",
"base_model:BEE-spoke-data/TinyLlama-3T-1.1bee",
"base_model:quantized:BEE-spoke-data/TinyLlama-3T-1.1bee",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2024-01-20T07:36:22Z |
---
base_model: BEE-spoke-data/TinyLlama-3T-1.1bee
datasets:
- BEE-spoke-data/bees-internal
inference: false
language:
- en
license: apache-2.0
metrics:
- accuracy
model_creator: BEE-spoke-data
model_name: TinyLlama-3T-1.1bee
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- bees
- bzz
- honey
- oprah winfrey
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- example_title: Queen Excluder
text: In beekeeping, the term "queen excluder" refers to
- example_title: Increasing Honey Production
text: One way to encourage a honey bee colony to produce more honey is by
- example_title: Lifecycle of a Worker Bee
text: The lifecycle of a worker bee consists of several stages, starting with
- example_title: Varroa Destructor
text: Varroa destructor is a type of mite that
- example_title: Beekeeping PPE
text: In the world of beekeeping, the acronym PPE stands for
- example_title: Robbing in Beekeeping
text: The term "robbing" in beekeeping refers to the act of
- example_title: Role of Drone Bees
text: 'Question: What''s the primary function of drone bees in a hive?
Answer:'
- example_title: Honey Harvesting Device
text: To harvest honey from a hive, beekeepers often use a device known as a
- example_title: Beekeeping Math Problem
text: 'Problem: You have a hive that produces 60 pounds of honey per year. You decide
to split the hive into two. Assuming each hive now produces at a 70% rate compared
to before, how much honey will you get from both hives next year?
To calculate'
- example_title: Swarming
text: In beekeeping, "swarming" is the process where
---
# BEE-spoke-data/TinyLlama-3T-1.1bee-GGUF
Quantized GGUF model files for [TinyLlama-3T-1.1bee](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee) from [BEE-spoke-data](https://huggingface.co/BEE-spoke-data)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-3t-1.1bee.fp16.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.fp16.gguf) | fp16 | 2.20 GB |
| [tinyllama-3t-1.1bee.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q2_k.gguf) | q2_k | 432.13 MB |
| [tinyllama-3t-1.1bee.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q3_k_m.gguf) | q3_k_m | 548.40 MB |
| [tinyllama-3t-1.1bee.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [tinyllama-3t-1.1bee.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [tinyllama-3t-1.1bee.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q6_k.gguf) | q6_k | 903.41 MB |
| [tinyllama-3t-1.1bee.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-3T-1.1bee-GGUF/resolve/main/tinyllama-3t-1.1bee.q8_0.gguf) | q8_0 | 1.17 GB |
## Original Model Card:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyLlama-3T-1.1bee

A grand successor to [the original](https://huggingface.co/BEE-spoke-data/TinyLlama-1.1bee). This one has the following improvements:
- start from [finished 3T TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
- vastly improved and expanded SoTA beekeeping dataset
## Model description
This model is a fine-tuned version of TinyLlama-1.1b-3T on the BEE-spoke-data/bees-internal dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1640
- Accuracy: 0.5406
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 13707
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4432 | 0.19 | 50 | 2.3850 | 0.5033 |
| 2.3655 | 0.39 | 100 | 2.3124 | 0.5129 |
| 2.374 | 0.58 | 150 | 2.2588 | 0.5215 |
| 2.3558 | 0.78 | 200 | 2.2132 | 0.5291 |
| 2.2677 | 0.97 | 250 | 2.1828 | 0.5348 |
| 2.0701 | 1.17 | 300 | 2.1788 | 0.5373 |
| 2.0766 | 1.36 | 350 | 2.1673 | 0.5398 |
| 2.0669 | 1.56 | 400 | 2.1651 | 0.5402 |
| 2.0314 | 1.75 | 450 | 2.1641 | 0.5406 |
| 2.0281 | 1.95 | 500 | 2.1639 | 0.5407 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
thangvip/vi-t5-base-finetune-rewriter-2-epochs
|
thangvip
| 2024-01-20T07:26:55Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-base",
"base_model:finetune:VietAI/vit5-base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-20T07:05:26Z |
---
license: mit
base_model: VietAI/vit5-base
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: vi-t5-base-finetune-rewriter-2-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-t5-base-finetune-rewriter-2-epochs
This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7955
- Bleu: 36.7726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
afrideva/beecoder-220M-python-GGUF
|
afrideva
| 2024-01-20T07:19:14Z | 41 | 0 | null |
[
"gguf",
"python",
"codegen",
"markdown",
"smol_llama",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"dataset:BEE-spoke-data/pypi_clean-deduped",
"dataset:bigcode/the-stack-smol-xl",
"dataset:EleutherAI/proof-pile-2",
"base_model:BEE-spoke-data/beecoder-220M-python",
"base_model:quantized:BEE-spoke-data/beecoder-220M-python",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-20T07:18:24Z |
---
base_model: BEE-spoke-data/beecoder-220M-python
datasets:
- BEE-spoke-data/pypi_clean-deduped
- bigcode/the-stack-smol-xl
- EleutherAI/proof-pile-2
inference: false
language:
- en
license: apache-2.0
metrics:
- accuracy
model_creator: BEE-spoke-data
model_name: beecoder-220M-python
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- python
- codegen
- markdown
- smol_llama
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- example_title: Add Numbers Function
text: "def add_numbers(a, b):\n return\n"
- example_title: Car Class
text: "class Car:\n def __init__(self, make, model):\n self.make = make\n
\ self.model = model\n\n def display_car(self):\n"
- example_title: Pandas DataFrame
text: 'import pandas as pd
data = {''Name'': [''Tom'', ''Nick'', ''John''], ''Age'': [20, 21, 19]}
df = pd.DataFrame(data).convert_dtypes()
# eda
'
- example_title: Factorial Function
text: "def factorial(n):\n if n == 0:\n return 1\n else:\n"
- example_title: Fibonacci Function
text: "def fibonacci(n):\n if n <= 0:\n raise ValueError(\"Incorrect input\")\n
\ elif n == 1:\n return 0\n elif n == 2:\n return 1\n else:\n"
- example_title: Matplotlib Plot
text: 'import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 10, 100)
# simple plot
'
- example_title: Reverse String Function
text: "def reverse_string(s:str) -> str:\n return\n"
- example_title: Palindrome Function
text: "def is_palindrome(word:str) -> bool:\n return\n"
- example_title: Bubble Sort Function
text: "def bubble_sort(lst: list):\n n = len(lst)\n for i in range(n):\n for
j in range(0, n-i-1):\n"
- example_title: Binary Search Function
text: "def binary_search(arr, low, high, x):\n if high >= low:\n mid =
(high + low) // 2\n if arr[mid] == x:\n return mid\n elif
arr[mid] > x:\n"
---
# BEE-spoke-data/beecoder-220M-python-GGUF
Quantized GGUF model files for [beecoder-220M-python](https://huggingface.co/BEE-spoke-data/beecoder-220M-python) from [BEE-spoke-data](https://huggingface.co/BEE-spoke-data)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [beecoder-220m-python.fp16.gguf](https://huggingface.co/afrideva/beecoder-220M-python-GGUF/resolve/main/beecoder-220m-python.fp16.gguf) | fp16 | 436.50 MB |
| [beecoder-220m-python.q2_k.gguf](https://huggingface.co/afrideva/beecoder-220M-python-GGUF/resolve/main/beecoder-220m-python.q2_k.gguf) | q2_k | 94.43 MB |
| [beecoder-220m-python.q3_k_m.gguf](https://huggingface.co/afrideva/beecoder-220M-python-GGUF/resolve/main/beecoder-220m-python.q3_k_m.gguf) | q3_k_m | 114.65 MB |
| [beecoder-220m-python.q4_k_m.gguf](https://huggingface.co/afrideva/beecoder-220M-python-GGUF/resolve/main/beecoder-220m-python.q4_k_m.gguf) | q4_k_m | 137.58 MB |
| [beecoder-220m-python.q5_k_m.gguf](https://huggingface.co/afrideva/beecoder-220M-python-GGUF/resolve/main/beecoder-220m-python.q5_k_m.gguf) | q5_k_m | 157.91 MB |
| [beecoder-220m-python.q6_k.gguf](https://huggingface.co/afrideva/beecoder-220M-python-GGUF/resolve/main/beecoder-220m-python.q6_k.gguf) | q6_k | 179.52 MB |
| [beecoder-220m-python.q8_0.gguf](https://huggingface.co/afrideva/beecoder-220M-python-GGUF/resolve/main/beecoder-220m-python.q8_0.gguf) | q8_0 | 232.28 MB |
## Original Model Card:
# BEE-spoke-data/beecoder-220M-python
This is `BEE-spoke-data/smol_llama-220M-GQA` fine-tuned for code generation on:
- filtered version of stack-smol-XL
- deduped version of 'algebraic stack' from proof-pile-2
- cleaned and deduped pypi (last dataset)
This model (and the base model) were both trained using ctx length 2048.
## examples
> Example script for inference testing: [here](https://gist.github.com/pszemraj/c7738f664a64b935a558974d23a7aa8c)
It has its limitations at 220M, but seems decent for single-line or docstring generation, and/or being used for speculative decoding for such purposes.

The screenshot is on CPU on a laptop.
---
|
BadBoy17G/whisper-tiny-custom-test-final
|
BadBoy17G
| 2024-01-20T07:16:24Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ta",
"dataset:customtamil",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-20T07:06:31Z |
---
language:
- ta
base_model: openai/whisper-tine
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- customtamil
model-index:
- name: Whisper Small Hi - gokulraj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - gokulraj
This model is a fine-tuned version of [openai/whisper-tine](https://huggingface.co/openai/whisper-tine) on the mydataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Jobiniah/bible-mistral-7b-merged
|
Jobiniah
| 2024-01-20T07:11:52Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-20T01:15:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thangvip/vi-t5-base-finetune-rewriter
|
thangvip
| 2024-01-20T07:00:11Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-base",
"base_model:finetune:VietAI/vit5-base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-20T06:48:36Z |
---
license: mit
base_model: VietAI/vit5-base
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: vi-t5-base-finetune-rewriter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-t5-base-finetune-rewriter
This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8354
- Bleu: 38.2750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
varun-v-rao/bert-base-cased-snli-model6
|
varun-v-rao
| 2024-01-20T07:00:04Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T05:59:06Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-cased-snli-model6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-snli-model6
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2668
- Accuracy: 0.9079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3466 | 1.0 | 4292 | 0.2753 | 0.8986 |
| 0.2782 | 2.0 | 8584 | 0.2617 | 0.9060 |
| 0.2232 | 3.0 | 12876 | 0.2668 | 0.9079 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed1
|
sdpkjc
| 2024-01-20T06:55:27Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"HalfCheetah-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-24T08:44:20Z |
---
tags:
- HalfCheetah-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v4
type: HalfCheetah-v4
metrics:
- type: mean_reward
value: 4168.02 +/- 236.06
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **HalfCheetah-v4**
This is a trained model of a PPO agent playing HalfCheetah-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_fix_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_fix_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ppo_fix_continuous_action --env-id HalfCheetah-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed1/raw/main/ppo_fix_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_fix_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id HalfCheetah-v4 --seed 1 --track
```
# Hyperparameters
```python
{'anneal_lr': True,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.2,
'clip_vloss': True,
'cuda': True,
'ent_coef': 0.0,
'env_id': 'HalfCheetah-v4',
'exp_name': 'ppo_fix_continuous_action',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'max_grad_norm': 0.5,
'minibatch_size': 64,
'norm_adv': True,
'num_envs': 1,
'num_minibatches': 32,
'num_steps': 2048,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'update_epochs': 10,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed4
|
sdpkjc
| 2024-01-20T06:52:11Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"HalfCheetah-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-18T23:00:40Z |
---
tags:
- HalfCheetah-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v4
type: HalfCheetah-v4
metrics:
- type: mean_reward
value: 2463.90 +/- 832.71
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **HalfCheetah-v4**
This is a trained model of a PPO agent playing HalfCheetah-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_fix_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_fix_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ppo_fix_continuous_action --env-id HalfCheetah-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed4/raw/main/ppo_fix_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed4/raw/main/poetry.lock
poetry install --all-extras
python ppo_fix_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id HalfCheetah-v4 --seed 4 --track
```
# Hyperparameters
```python
{'anneal_lr': True,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.2,
'clip_vloss': True,
'cuda': True,
'ent_coef': 0.0,
'env_id': 'HalfCheetah-v4',
'exp_name': 'ppo_fix_continuous_action',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'max_grad_norm': 0.5,
'minibatch_size': 64,
'norm_adv': True,
'num_envs': 1,
'num_minibatches': 32,
'num_steps': 2048,
'save_model': True,
'seed': 4,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'update_epochs': 10,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
sdpkjc/Swimmer-v4-ppo_fix_continuous_action-seed4
|
sdpkjc
| 2024-01-20T06:48:38Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Swimmer-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-19T01:49:06Z |
---
tags:
- Swimmer-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v4
type: Swimmer-v4
metrics:
- type: mean_reward
value: 120.62 +/- 1.83
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Swimmer-v4**
This is a trained model of a PPO agent playing Swimmer-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_fix_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_fix_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ppo_fix_continuous_action --env-id Swimmer-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-ppo_fix_continuous_action-seed4/raw/main/ppo_fix_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-ppo_fix_continuous_action-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-ppo_fix_continuous_action-seed4/raw/main/poetry.lock
poetry install --all-extras
python ppo_fix_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 4 --track
```
# Hyperparameters
```python
{'anneal_lr': True,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.2,
'clip_vloss': True,
'cuda': True,
'ent_coef': 0.0,
'env_id': 'Swimmer-v4',
'exp_name': 'ppo_fix_continuous_action',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'max_grad_norm': 0.5,
'minibatch_size': 64,
'norm_adv': True,
'num_envs': 1,
'num_minibatches': 32,
'num_steps': 2048,
'save_model': True,
'seed': 4,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'update_epochs': 10,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed2
|
sdpkjc
| 2024-01-20T06:48:36Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"HalfCheetah-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-24T07:26:30Z |
---
tags:
- HalfCheetah-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v4
type: HalfCheetah-v4
metrics:
- type: mean_reward
value: 1867.07 +/- 47.73
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **HalfCheetah-v4**
This is a trained model of a PPO agent playing HalfCheetah-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_fix_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_fix_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ppo_fix_continuous_action --env-id HalfCheetah-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed2/raw/main/ppo_fix_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-ppo_fix_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python ppo_fix_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id HalfCheetah-v4 --seed 2 --track
```
# Hyperparameters
```python
{'anneal_lr': True,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.2,
'clip_vloss': True,
'cuda': True,
'ent_coef': 0.0,
'env_id': 'HalfCheetah-v4',
'exp_name': 'ppo_fix_continuous_action',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'max_grad_norm': 0.5,
'minibatch_size': 64,
'norm_adv': True,
'num_envs': 1,
'num_minibatches': 32,
'num_steps': 2048,
'save_model': True,
'seed': 2,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'update_epochs': 10,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
sdpkjc/Swimmer-v4-ppo_fix_continuous_action-seed1
|
sdpkjc
| 2024-01-20T06:48:34Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Swimmer-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-24T09:52:45Z |
---
tags:
- Swimmer-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v4
type: Swimmer-v4
metrics:
- type: mean_reward
value: 131.51 +/- 1.19
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Swimmer-v4**
This is a trained model of a PPO agent playing Swimmer-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_fix_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_fix_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ppo_fix_continuous_action --env-id Swimmer-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-ppo_fix_continuous_action-seed1/raw/main/ppo_fix_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-ppo_fix_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-ppo_fix_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_fix_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 1 --track
```
# Hyperparameters
```python
{'anneal_lr': True,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.2,
'clip_vloss': True,
'cuda': True,
'ent_coef': 0.0,
'env_id': 'Swimmer-v4',
'exp_name': 'ppo_fix_continuous_action',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'max_grad_norm': 0.5,
'minibatch_size': 64,
'norm_adv': True,
'num_envs': 1,
'num_minibatches': 32,
'num_steps': 2048,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'update_epochs': 10,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Atipico1/popQA-base-new-hp
|
Atipico1
| 2024-01-20T06:46:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-20T06:46:30Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed5
|
sdpkjc
| 2024-01-20T06:33:18Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-18T20:14:53Z |
---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 2504.30 +/- 688.11
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Hopper-v4**
This is a trained model of a PPO agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_fix_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_fix_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ppo_fix_continuous_action --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed5/raw/main/ppo_fix_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed5/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed5/raw/main/poetry.lock
poetry install --all-extras
python ppo_fix_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Hopper-v4 --seed 5 --track
```
# Hyperparameters
```python
{'anneal_lr': True,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.2,
'clip_vloss': True,
'cuda': True,
'ent_coef': 0.0,
'env_id': 'Hopper-v4',
'exp_name': 'ppo_fix_continuous_action',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'max_grad_norm': 0.5,
'minibatch_size': 64,
'norm_adv': True,
'num_envs': 1,
'num_minibatches': 32,
'num_steps': 2048,
'save_model': True,
'seed': 5,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'update_epochs': 10,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed3
|
sdpkjc
| 2024-01-20T06:31:15Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-18T20:14:54Z |
---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 2814.10 +/- 723.91
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Hopper-v4**
This is a trained model of a PPO agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_fix_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_fix_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ppo_fix_continuous_action --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed3/raw/main/ppo_fix_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed3/raw/main/poetry.lock
poetry install --all-extras
python ppo_fix_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Hopper-v4 --seed 3 --track
```
# Hyperparameters
```python
{'anneal_lr': True,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.2,
'clip_vloss': True,
'cuda': True,
'ent_coef': 0.0,
'env_id': 'Hopper-v4',
'exp_name': 'ppo_fix_continuous_action',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'max_grad_norm': 0.5,
'minibatch_size': 64,
'norm_adv': True,
'num_envs': 1,
'num_minibatches': 32,
'num_steps': 2048,
'save_model': True,
'seed': 3,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'update_epochs': 10,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
kata958/distilbert-base-uncased-distilled-clinc
|
kata958
| 2024-01-20T06:31:09Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T05:23:26Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0337
- Accuracy: 0.9339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 0.2507 | 0.6139 |
| 0.3907 | 2.0 | 636 | 0.1147 | 0.8477 |
| 0.3907 | 3.0 | 954 | 0.0737 | 0.8952 |
| 0.1311 | 4.0 | 1272 | 0.0560 | 0.9055 |
| 0.0799 | 5.0 | 1590 | 0.0454 | 0.9245 |
| 0.0799 | 6.0 | 1908 | 0.0405 | 0.9294 |
| 0.0622 | 7.0 | 2226 | 0.0372 | 0.9303 |
| 0.0539 | 8.0 | 2544 | 0.0351 | 0.9323 |
| 0.0539 | 9.0 | 2862 | 0.0342 | 0.9326 |
| 0.0501 | 10.0 | 3180 | 0.0337 | 0.9339 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed2
|
sdpkjc
| 2024-01-20T06:30:10Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-18T20:15:37Z |
---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 1865.47 +/- 529.75
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Hopper-v4**
This is a trained model of a PPO agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_fix_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_fix_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ppo_fix_continuous_action --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed2/raw/main/ppo_fix_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python ppo_fix_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Hopper-v4 --seed 2 --track
```
# Hyperparameters
```python
{'anneal_lr': True,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.2,
'clip_vloss': True,
'cuda': True,
'ent_coef': 0.0,
'env_id': 'Hopper-v4',
'exp_name': 'ppo_fix_continuous_action',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.0003,
'max_grad_norm': 0.5,
'minibatch_size': 64,
'norm_adv': True,
'num_envs': 1,
'num_minibatches': 32,
'num_steps': 2048,
'save_model': True,
'seed': 2,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'update_epochs': 10,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Atipico1/popQA-base-unans
|
Atipico1
| 2024-01-20T06:24:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-20T06:24:41Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
alexsherstinsky/mistral-7b-based-finetuned-using-ludwig-with-jigsaw-T4-4bit-notmerged
|
alexsherstinsky
| 2024-01-20T06:20:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-20T03:08:50Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
aruca/finetuning-sentiment-analysis-bert2epoch
|
aruca
| 2024-01-20T06:07:49Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T05:58:41Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-analysis-bert2epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-analysis-bert2epoch
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5203
- Accuracy: 0.7988
- F1: [0.79592826 0.76464324 0.84552102]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Arnav2612/llama2-qlora-finetunined-french
|
Arnav2612
| 2024-01-20T06:06:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2024-01-20T06:06:27Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Atipico1/NQ-cbr-unans-custom-new
|
Atipico1
| 2024-01-20T05:45:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-20T05:45:40Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Atipico1/NQ-cbr-unans-new
|
Atipico1
| 2024-01-20T05:45:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-20T05:45:04Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
mlx-community/yayi2-30b-llama-hf-4bit-mlx
|
mlx-community
| 2024-01-20T05:38:52Z | 2 | 0 |
mlx
|
[
"mlx",
"llama",
"zh",
"en",
"license:other",
"region:us"
] | null | 2024-01-20T03:50:14Z |
---
language:
- zh
- en
license: other
tags:
- mlx
---
# mlx-community/yayi2-30b-llama-hf-4bit-mlx
This model was converted to MLX format from [`cognitivecomputations/yayi2-30b-llama`]().
Refer to the [original model card](https://huggingface.co/cognitivecomputations/yayi2-30b-llama) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/yayi2-30b-llama-hf-4bit-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
jeiku/Gattaca_3B
|
jeiku
| 2024-01-20T05:36:16Z | 19 | 1 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"en",
"dataset:AdamCodd/no_robots-alpaca",
"dataset:diffnamehard/toxic-dpo-v0.1-NoWarning-alpaca",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned",
"dataset:FriezaForce/unranked_theory_of_mind_roleplay",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:jeiku/Bluemoon_cleaned_StableLM",
"base_model:merge:jeiku/Bluemoon_cleaned_StableLM",
"base_model:jeiku/Everything_v3_128_StableLM",
"base_model:merge:jeiku/Everything_v3_128_StableLM",
"base_model:jeiku/No_Robots_Alpaca_StableLM",
"base_model:merge:jeiku/No_Robots_Alpaca_StableLM",
"base_model:jeiku/RocketHermesZephyrBoros_3B",
"base_model:merge:jeiku/RocketHermesZephyrBoros_3B",
"base_model:jeiku/Theory_of_Mind_RP_128_StableLM",
"base_model:merge:jeiku/Theory_of_Mind_RP_128_StableLM",
"base_model:jeiku/Toxic_DPO_StableLM",
"base_model:merge:jeiku/Toxic_DPO_StableLM",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-19T01:05:28Z |
---
base_model:
- jeiku/RocketHermesZephyrBoros_3B
- jeiku/Erotica_StableLM
- jeiku/RocketHermesZephyrBoros_3B
- jeiku/No_Robots_Alpaca_StableLM
- jeiku/RocketHermesZephyrBoros_3B
- jeiku/Toxic_DPO_StableLM
- jeiku/RocketHermesZephyrBoros_3B
- jeiku/Everything_v3_128_StableLM
- jeiku/RocketHermesZephyrBoros_3B
- jeiku/Bluemoon_cleaned_StableLM
- jeiku/RocketHermesZephyrBoros_3B
- jeiku/RocketHermesZephyrBoros_3B
- jeiku/Gnosis_StableLM
- jeiku/RocketHermesZephyrBoros_3B
- jeiku/Theory_of_Mind_RP_128_StableLM
tags:
- mergekit
- merge
license: other
datasets:
- AdamCodd/no_robots-alpaca
- diffnamehard/toxic-dpo-v0.1-NoWarning-alpaca
- totally-not-an-llm/EverythingLM-data-V3
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- FriezaForce/unranked_theory_of_mind_roleplay
language:
- en
---
# Mixed
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [jeiku/RocketHermesZephyrBoros_3B](https://huggingface.co/jeiku/RocketHermesZephyrBoros_3B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/RocketHermesZephyrBoros_3B](https://huggingface.co/jeiku/RocketHermesZephyrBoros_3B) + [jeiku/Erotica_StableLM](https://huggingface.co/jeiku/Erotica_StableLM)
* [jeiku/RocketHermesZephyrBoros_3B](https://huggingface.co/jeiku/RocketHermesZephyrBoros_3B) + [jeiku/No_Robots_Alpaca_StableLM](https://huggingface.co/jeiku/No_Robots_Alpaca_StableLM)
* [jeiku/RocketHermesZephyrBoros_3B](https://huggingface.co/jeiku/RocketHermesZephyrBoros_3B) + [jeiku/Toxic_DPO_StableLM](https://huggingface.co/jeiku/Toxic_DPO_StableLM)
* [jeiku/RocketHermesZephyrBoros_3B](https://huggingface.co/jeiku/RocketHermesZephyrBoros_3B) + [jeiku/Everything_v3_128_StableLM](https://huggingface.co/jeiku/Everything_v3_128_StableLM)
* [jeiku/RocketHermesZephyrBoros_3B](https://huggingface.co/jeiku/RocketHermesZephyrBoros_3B) + [jeiku/Bluemoon_cleaned_StableLM](https://huggingface.co/jeiku/Bluemoon_cleaned_StableLM)
* [jeiku/RocketHermesZephyrBoros_3B](https://huggingface.co/jeiku/RocketHermesZephyrBoros_3B) + [jeiku/Gnosis_StableLM](https://huggingface.co/jeiku/Gnosis_StableLM)
* [jeiku/RocketHermesZephyrBoros_3B](https://huggingface.co/jeiku/RocketHermesZephyrBoros_3B) + [jeiku/Theory_of_Mind_RP_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_RP_128_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jeiku/RocketHermesZephyrBoros_3B+jeiku/Bluemoon_cleaned_StableLM
parameters:
weight: 0.30
density: 0.25
- model: jeiku/RocketHermesZephyrBoros_3B+jeiku/Toxic_DPO_StableLM
parameters:
weight: 0.25
density: 0.25
- model: jeiku/RocketHermesZephyrBoros_3B+jeiku/Theory_of_Mind_RP_128_StableLM
parameters:
weight: 0.35
density: 0.25
- model: jeiku/RocketHermesZephyrBoros_3B+jeiku/No_Robots_Alpaca_StableLM
parameters:
weight: 0.25
density: 0.25
- model: jeiku/RocketHermesZephyrBoros_3B+jeiku/Everything_v3_128_StableLM
parameters:
weight: 0.5
density: 0.5
- model: jeiku/RocketHermesZephyrBoros_3B+jeiku/Gnosis_StableLM
parameters:
weight: 0.4
density: 0.4
- model: jeiku/RocketHermesZephyrBoros_3B+jeiku/Erotica_StableLM
parameters:
weight: 0.20
density: 0.20
merge_method: dare_ties
base_model: jeiku/RocketHermesZephyrBoros_3B
parameters:
dtype: bfloat16
```
|
jsmithdlc/dqn-SpaceInvadersNoFrameskip-v4
|
jsmithdlc
| 2024-01-20T05:13:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T05:07:52Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 572.00 +/- 225.22
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jsmithdlc -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jsmithdlc -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jsmithdlc
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
kata958/distilbert-base-uncased-finetuned-clinc
|
kata958
| 2024-01-20T05:06:05Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T11:55:39Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7955
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.3057 | 0.7226 |
| 3.8091 | 2.0 | 636 | 1.8921 | 0.8484 |
| 3.8091 | 3.0 | 954 | 1.1793 | 0.8929 |
| 1.7173 | 4.0 | 1272 | 0.8793 | 0.9097 |
| 0.9215 | 5.0 | 1590 | 0.7955 | 0.9174 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
krishnadasar-sudheer-kumar/ppo-CleanRL-Unit8-LunarLander-V2
|
krishnadasar-sudheer-kumar
| 2024-01-20T05:02:42Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T04:41:36Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 20.85 +/- 60.59
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 400000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'krishnadasar-sudheer-kumar/ppo-CleanRL-Unit8-LunarLander-V2'
'batch_size': 512
'minibatch_size': 128}
```
|
snowsense/food-classification-1k
|
snowsense
| 2024-01-20T04:57:55Z | 0 | 1 |
keras
|
[
"keras",
"image-classification",
"en",
"zh",
"dataset:snowsense/food-images-1k",
"license:mit",
"region:us"
] |
image-classification
| 2024-01-12T13:14:29Z |
---
license: mit
datasets:
- snowsense/food-images-1k
language:
- en
- zh
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
---
|
varun-v-rao/bert-base-cased-snli-model4
|
varun-v-rao
| 2024-01-20T04:56:43Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T03:56:00Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-cased-snli-model4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-snli-model4
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2721
- Accuracy: 0.9077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 47
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.342 | 1.0 | 4292 | 0.2771 | 0.8972 |
| 0.2742 | 2.0 | 8584 | 0.2644 | 0.9067 |
| 0.2249 | 3.0 | 12876 | 0.2721 | 0.9077 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/zephyr-7b-sft-full-SPIN-iter3-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-20T04:42:01Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"arxiv:2401.01335",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T04:38:51Z |
---
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
pipeline_tag: text-generation
---
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models (https://arxiv.org/abs/2401.01335)
# zephyr-7b-sft-full-spin-iter3
This model is a self-play fine-tuned model at iteration 3 from [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) using synthetic data based on on the [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
## Model Details
### Model Description
- Model type: A 7B parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: alignment-handbook/zephyr-7b-sft-full (based on mistralai/Mistral-7B-v0.1)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- optimizer: RMSProp
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_UCLA-AGI__test_final)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 63.70 |
| ARC (25-shot) | 66.13 |
| HellaSwag (10-shot) | 85.85 |
| MMLU (5-shot) | 61.51 |
| TruthfulQA (0-shot) | 57.89 |
| Winogrande (5-shot) | 76.64 |
| GSM8K (5-shot) | 34.19 |
## Citation
```
@misc{chen2024selfplay,
title={Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models},
author={Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu},
year={2024},
eprint={2401.01335},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
KhunKai05/Kai
|
KhunKai05
| 2024-01-20T04:31:07Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2024-01-20T04:31:07Z |
---
license: bigscience-openrail-m
---
|
LoneStriker/zephyr-7b-sft-full-SPIN-iter3-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-20T04:24:31Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"arxiv:2401.01335",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T04:22:52Z |
---
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
pipeline_tag: text-generation
---
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models (https://arxiv.org/abs/2401.01335)
# zephyr-7b-sft-full-spin-iter3
This model is a self-play fine-tuned model at iteration 3 from [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) using synthetic data based on on the [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
## Model Details
### Model Description
- Model type: A 7B parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: alignment-handbook/zephyr-7b-sft-full (based on mistralai/Mistral-7B-v0.1)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- optimizer: RMSProp
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_UCLA-AGI__test_final)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 63.70 |
| ARC (25-shot) | 66.13 |
| HellaSwag (10-shot) | 85.85 |
| MMLU (5-shot) | 61.51 |
| TruthfulQA (0-shot) | 57.89 |
| Winogrande (5-shot) | 76.64 |
| GSM8K (5-shot) | 34.19 |
## Citation
```
@misc{chen2024selfplay,
title={Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models},
author={Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu},
year={2024},
eprint={2401.01335},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
macadeliccc/Orca-SOLAR-4x10.7b-GGUF
|
macadeliccc
| 2024-01-20T04:21:06Z | 8 | 1 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-18T03:19:58Z |
---
license: cc-by-nc-4.0
---
# Orca SOLAR 4x10.7b GGUF
## Overview
This model is the GGUF conversion of [macadeliccc/Orca-SOLAR-4x10.7b](https://huggingface.co/macadeliccc/Orca-SOLAR-4x10.7b)
## HF Spaces
Try it [here](https://huggingface.co/spaces/macadeliccc/Orca-SOLAR-4x10.7b-chat-GGUF)
|
LoneStriker/zephyr-7b-sft-full-SPIN-iter3-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-20T04:18:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"arxiv:2401.01335",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T04:17:31Z |
---
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
pipeline_tag: text-generation
---
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models (https://arxiv.org/abs/2401.01335)
# zephyr-7b-sft-full-spin-iter3
This model is a self-play fine-tuned model at iteration 3 from [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) using synthetic data based on on the [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
## Model Details
### Model Description
- Model type: A 7B parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: alignment-handbook/zephyr-7b-sft-full (based on mistralai/Mistral-7B-v0.1)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- optimizer: RMSProp
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_UCLA-AGI__test_final)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 63.70 |
| ARC (25-shot) | 66.13 |
| HellaSwag (10-shot) | 85.85 |
| MMLU (5-shot) | 61.51 |
| TruthfulQA (0-shot) | 57.89 |
| Winogrande (5-shot) | 76.64 |
| GSM8K (5-shot) | 34.19 |
## Citation
```
@misc{chen2024selfplay,
title={Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models},
author={Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu},
year={2024},
eprint={2401.01335},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
LieDeath/MergeStove2.5D
|
LieDeath
| 2024-01-20T04:17:47Z | 70 | 39 |
diffusers
|
[
"diffusers",
"art",
"text-to-image",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-26T12:58:33Z |
---
license: cc-by-nc-4.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---
I found a new AI tool Shakker, a best image to image tool. You can try it via https://www.shakker.ai ,it can help you:
-Remix: Upload a picture. Just switch the prompts, and you can create stunning images in the same style.
-Style Transfer: Shakker not only extracts the style,but also switches among various styles.
Besides, Shakker also offers Object Control,Composition Control,Collage Redrawing etc.
# MergeStove2.5D(融合炉2.5D)
**Hatsune Miku, Thank you.**
It's time to say goodbye to MergeStove, sayolala. Thanks for your sincerely surpport. The **MK8** maybe the last MergeStove, and if I have enough time, I will reconstruct this Readme, including the previews of MK8.
是时候和MergeStove说再见了,感谢你们的陪伴。**MK8**可能会是最后一个MergeStove模型了,如果我有时间,我会把现在的Readme重构的,包括补上MK8的预览图。
MK7 is ready!!! In memory of my college entrance exam a total year ago. For previews, ALL here for MK7, just download and enjoy it. :)
MK7版本已发布,纪念一年前我的高考。预览图已补充,下载它,你会喜欢它的。:)
**Important** Use the negatives below for best performance of MK7. Other options are also available in the Selected Negative Prompts for MK7.txt
*badhandv4, EasyNegative, verybadimagenegative_v1.3,illustration, 3d, sepia, painting, cartoons, sketch, (worst quality:1.74), (low quality:1.74), (normal quality:1.44), lowres, bad anatomy, normal quality, ((monochrome)), ((grayscale)), ((letters)), ((english)), capital*
It contains 3 negative textural embeddings, which are **badhandv4, EasyNegative, verybadimagenegative_v1.3**, each of them can easily download on huggingface.
**重要** 使用上面的负面描述词以使MK7达到最佳效果。其他的可选负面描述词可以在Selected Negative Prompts for MK7.txt内查看。
它包含3个负面嵌入Embeddings,分别是**badhandv4, EasyNegative, verybadimagenegative_v1.3**,且每个都能轻松的在huggingface上下载到。
PS: MK5 and MK6 use these configs below will be much better.
提示:MK5和MK6使用以下设置可能会更好。
*Steps: 20, Sampler: Heun, CFG scale: 7, Denoising strength: 0.5, Clip skip: 2, Hires upscale: 3, Hires upscaler: R-ESRGAN 4x+ Anime6B, Used embeddings: EasyNegative [119b]*
**mk6 reconstructed** its base model, which change to AbyssOrangeMix2_sfw. And with models new to here, it expands its knowledges, and which be **impressive** in extra-big pictures. I hope you can love it!
**mk6版更新重构了**它本身的基础模型,其中的AbyssOrangeMix2被更换为sfw版。还有我加入了很多新模型来扩展它的知识面,这使得mk6在超大图片中表现**惊艳**。
mk5 update, specially for **chinese friends**, quite a few improvements.
mk5版更新,是专门为了**中国朋友们**准备的,有非常多的改进。
MergeStove2.5D is a **merge** stable diffusion model specialized in **anime**, which improves anatomy of anime characters, especially with **eyes** and **hands**, without losing anime objects (like substances or charaters).
Much better for working at 0.9K-1.2K resoultion, or use Hires.fix instead. In another words, before Hires.fix, long side at 0.9k-1.2k, short side at 0.5k-0.7k resolutions are better.
Provide in 6 versions. Personally mk1 works better, but mk2 give out more vivid pictures. Previous update mk3 and mk4 are proudly do better in 2.5D figures. mk3 do better in generate body, but mk4 improve scene.
融合炉2.5D是一个**动漫风格特化**的稳定扩散模型,由**多个模型融合**而来,专门改善动漫人物的身体结构,特别是**眼睛**和**手**,同时不会丢失任何动漫中的对象(物体、人物等)。
其在900-1200像素的分辨率下工作较好,或者可以使用高清修复改善其高分辨率表现。换句话说,高清修复前长边900-1200像素,短边500-700像素这样子比较好。
提供6个版本。个人感觉mk1版工作的更好,但是mk2版本能生成更生动的图像。我可以很自豪的说,先前更新的mk3和mk4在2.5D人物中表现的更好。mk3有相对较好的人体,但是mk4改进了景物表现。
**No commercial usage! 严禁商用!**
# Preview(预览)
**Updates**
**mk7** (after hi-res fix at 0.45)(高清修复比率0.45) *demon tail, butterfly, tail, bug, 1girl, long hair, wristband, shoes, hatsune miku, shirt, choker, black legwear, aqua hair, bike shorts, solo, blue butterfly, twintails, black choker, bracelet, full body, black ribbon, cow tail, very long hair, tail ornament, jewelry, black bow, hair between eyes, ahoge, white shirt, earrings, grey background, tail bow, standing, jacket, shorts, collarbone, off shoulder, short sleeves, ribbon, black footwear, aqua eyes, gradient, bow, socks, looking at viewer*

**mk7** (after hi-res fix at 0.45)(高清修复比率0.45) *{masterpiece}, hatsune miku, sit on sakura tree branch, floating cyan long hair, wind flow, sakura petals floating, closed eyes, sun shine upward, shadows,white long dress, cloud sky with sun, hamony and peace, bare feet, medium breast*

**mk7** (after hi-res fix at 0.45)(高清修复比率0.45) *flying sweatdrops, long hair, blue hair, hair ornament, 1girl, english text, open mouth, closed eyes, phone, smile, cellphone, uniform, necktie, gloves, bangs, solo, blush, hatsune miku*

**Previous**
**mk6** (after hi-res fix at 0.6)(高清修复比率0.6) *close-up, upper body, blue eyes black middle, snow miku stand in right side of frame, starry night with distance snow mountains scene in left side of frame, solo charater, snow stage, thick coat long dress, shinny and vivid eyes, curly long aqua hair fall on ground, medium breasts, windless, floating snows, mountain right, snow forest*

**mk6** (after hi-res fix at 0.6)(高清修复比率0.6) *halo, [wings], leg tie, (hathatsune) miku, full body, long legs, [[lips]], red eyes, medium breasts, (white hair), (streaked blue) hair, round face, [ahoge], black gloves, (hathatsune) miku, closed mouth, full body, straight long 2 legs, starry night, bubble nebula,, [[lips]], lace long dress, small breasts, flat chest, flowers*

**mk6** (after hi-res fix at 0.6)(高清修复比率0.6) *solo, halo, feather wings, (hathatsune) miku, fox ears, straight long 2 legs, black long silk stocking, leg ring tie, full body, [[lips]], red eyes, medium breasts, ahoge, (white hair), (streaked blue) hair, round face, black gloves, closed mouth, starry night, bubble nebula, lace long dress, medium breasts, feathers*

**mk5** (after hi-res fix at 0.7)(高清修复比率0.7) *(masterpiece), (((a girl))), ((hatsune miku)), (smiling), ((shining red medium eyes)), medium breasts, pink lips, moon in the sky, dark night, blue flowers surround one's, (blue dress), (blue long hair), stars shining, green grassland, (stream in grassland), (one's stand in the grassland), face to viewer, black higheels, long legs, full body*

**mk5** (after hi-res fix at 0.6)(高清修复比率0.6) *hatsune miku, closed mouth, full body, straight long legs, starry night, bubble nebula,, [[lips]], black long dress*

**mk1** (after hi-res fix at 0.7)(高清修复比率0.7) *miku, ruby eyes, face to viewer, solo, medium breasts, soft light, outdoors, garden, seaside, beauty*

**mk1** *miku, crystal eyes, upper body, face to viewer, solo, medium breasts, soft light, garden, seaside, ocean, bikini*

**mk1** *miku, crystal eyes, upper body, face to viewer, solo, medium breasts, soft light, outdoors, garden, seaside, beauty, blue white dress*

**mk2** *miku, crystal eyes, upper body, face to viewer, solo, before bookshelf, book in hands*

**mk2** *miku, crystal eyes, upper body, face to viewer, solo, before bookshelf, book in hands*

**mk2** *miku, crystal eyes, upper body, face to viewer, solo, before bookshelf, book in hands*

**mk3** (after hi-res fix at 0.7)(高清修复比率0.7) *hathatsune miku, seaside, shinny eyes, medium breasts, garden, ocean, seawind, soft sunset, beauty, beach shoes, short dress*

**mk3** (after hi-res fix at 0.7)(高清修复比率0.7) *miku, seaside, shinny eyes, medium breasts, bikini, surfing, on surfing board, wave, seawind, (wet body:0.75), (🏄🏻:0.66)*

**mk4** (after hi-res fix at 0.7)(高清修复比率0.7) *hathatsune miku, seaside, shinny eyes, medium breasts, garden, ocean, seawind, soft sunset, beauty, beach shoes, short dress*

**mk4** (after hi-res fix at 0.7)(高清修复比率0.7) *miku, seaside, shinny eyes, medium breasts, bikini, bare feet, (surfing), (on 1_surfing_board), wave, seawind, wet body, liquid on cloth, see through*

# Usage(使用方法)
Use as normal stable diffusion model package v1.x, no external yaml config is needed.
**Recommand settings: Steps: 9-28, Sampler: DPM++ SDE Karras, CFG scale: 5-16, Denoising strength: 0.6-0.7, Hires upscale: 2, Hires upscaler: Latent**
用作正常的稳定扩散模型包v1.x,无需额外的YAML配置文件。
**推荐设置:迭代步数:9-28,采样器:DPM++ SDE Karras,提示词相关性:5-16,去噪强度:0.6-0.7,高清修复放大倍率:2,高清修复放大器:Latent**
# Tags(描述词)
Positives as you like, maybe less quality words works better. You can get inspirations from upper descriptions.
**Negatives better to use the basic prompts, or just replace as bad_prompt embedding.**
**Negatives Example:** *(bad_prompt), cleavage, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artists name*
正面填写你喜欢的描述词,也许更少的质量描述词能使其工作的更好。你可以在上面的预览图描述词中得到灵感。
**负面描述词最好用基本负面,或者简单的把它们替换成bad_prompt这个嵌入模型。**
**负面描述词示例:** *(bad_prompt), cleavage, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artists name*
**Use "blue eyes black middle" description can get huge improvement on pupil at low resolution! Colors can change as your preferance.**
**使用"blue eyes black middle"这样子的描述词可在低分辨率下极大的改善对瞳孔的描绘!颜色可以改为你喜欢的。**
Here are the **better negatives**, thanks andite: *lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))*
这里是**更好的负面描述词**,谢谢andite:*lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))*
From NovelAI 中文频道, I got some **even better negative prompts**. That is it, *EasyNegative, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, extra fingers, fewer fingers, strange fingers, ((bad hand)), Hand grip, (lean), Extra ears, (Four ears), Strange eyes, ((Bare nipple)), nsfw, (three arms), Many hands, (Many arms), ((watermarking)), (inaccurate limb:1.2)*
Note, it use the **EasyNegative** embbedings, which you need to download manually. It is also a well working filter on nsfw contants.
我在NovelAI 中文频道找到了一些**还要更好的负面描述词**。它们在这里, *EasyNegative, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, extra fingers, fewer fingers, strange fingers, ((bad hand)), Hand grip, (lean), Extra ears, (Four ears), Strange eyes, ((Bare nipple)), nsfw, (three arms), Many hands, (Many arms), ((watermarking)), (inaccurate limb:1.2)*
注意,它使用了**EasyNegative**这个嵌入模型,你需要手动下载它。这些描述词还能更好的过滤成人内容。
# Bias(不足)
**Notice:** Definitely important to enable the **Hires.fix**, especially on the **mk5 and mk6**. Or low quality images will be generated!!!
**注意:** 启用**高清修复**至关重要,特别是在**mk5和mk6**上。不然会产生低质量图片!!!
**include nsfw contents due to its original models!**
**DO NOT USE your generated pictures for Pirate human artists or any Internet Violence! Such as on Bilibili or Youtube.**
Sometimes long necks appear. Still hazy a bit. Under some theme will produce wrong skin gloss. Sometimes overfitting. Often produce Unhuman Size Breasts girl pictures unless use cleavage tag in negative.
**含有成人内容,由于其原始模型本身的不足!**
**请勿把你用本模型生成的图像用于嘲讽人类画师或者其他任何形式的网络暴力!例如在Bilibili或者Youtube上。**
有时会生成过长的脖子。仍然有点模糊。在某些特定场景会产生错误的皮肤光泽。有时生成的图像会过拟合训练集内版权图片。经常会生成非人类大小的乳房(USB)的女性图片,除非在负面描述词中使用cleavage这个标签。
# Formula(融合配方)
**Round1** animefull-latest(NovelAI)+64in1(Private, from a Chinese AI community NovelAI 中文频道) sum rate0.4
**Round2** ()+AbyssOrangemix2_nsfw(WarriorMama777) sum rate0.2
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, compressed to FP16, get MergeStove2.5D_mk1.
**第一轮** animefull-latest(NovelAI)+64in1(私有,来自中国AI社区NovelAI 中文频道) 加权和模式 比率0.4
**第二轮** ()+AbyssOrangemix2_nsfw(WarriorMama777) 加权和模式 比率0.2
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk1模型。
**Round3A** MergeStove2.5D_mk1+Anmokomergetest1(Private, from a Chinese AI community NovelAI 中文频道, Download [Anmokomergetest1](https://huggingface.co/LieDeath/Anmokomergetest1).) sum rate0.4
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, compressed to FP16, get MergeStove2.5D_mk2.
**第三轮A** MergeStove2.5D_mk1+Anmokomergetest1(私有,来自中国AI社区NovelAI 中文频道,下载[Anmokomergetest1](https://huggingface.co/LieDeath/Anmokomergetest1)。) 加权和模式 比率0.4
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk2模型。
**Round3B** MergeStove2.5D_mk1+uberRealisticPornMer_urpMv11(Civitai, from saftle) sum rate 0.1
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, compressed to FP16, get MergeStove2.5D_mk3.
**第三轮B** MergeStove2.5D_mk1+uberRealisticPornMer_urpMv11(来自CivitAI的saftle) 加权和模式 比率0.1
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk3模型。
**Round4B** MergeStove2.5D_mk3+momoko-e(Anonymous) sum rate 0.1
**Round5B** ()+Protogen_V2.2(darkstorm2150) sum rate 0.1
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, compressed to FP16, get MergeStove2.5D_mk4.
**第四轮B** MergeStove2.5D_mk3+momoko-e(匿名) 加权和模式 比率0.1
**第五轮B** ()+Protogen_V2.2(darkstorm2150) 加权和模式 比率0.1
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,压缩为FP16格式,得到MergeStove2.5D_mk4模型。
**Round4A** MergeStove2.5D_mk2+chilloutmix_Ni(Civitai, from tasuku) sum rate 0.1
**Round5A** ()+laolei-new-berry-protogen mix(Civitai, from hokono) sum rate 0.1
**Round6A** ()+pastelmix(andite) sum rate 0.05
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, get MergeStove2.5D_mk5.
**第四轮A** MergeStove2.5D_mk2+chilloutmix_Ni(来自CivitAI的tasuku) 加权和模式 比率0.1
**第五轮A** ()+laolei-new-berry-protogen mix(来自CivitAI的hokono) 加权和模式 比率0.1
**第六轮A** ()+pastelmix(andite) 加权和模式 比率0.05
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,得到MergeStove2.5D_mk5模型。
**Special:** AbyssOrangemix2_sfw works better at all these above MergeStove2.5D series. Only Round6A works at FP32 mode.
**注意:** AbyssOrangemix2_sfw在上面所有的MergeStove2.5D系列融合模型中工作的更好。只有第六轮A使用了FP32融合模式。
**Roundx** Replace AbyssOrangeMix2_nsfw with AbyssOrangeMix2_sfw and Reconstructed mk5 with full FP32, get modelx.
**Round7x** modelx+Nothing-V0.3(Chinese, Anonymous) sum rate 0.1
**Round8x** ()+7th_anime_v2_A(syaimu) sum rate 0.1
**Round9x** ()+mdjrny-v4(Anonymous) mbw in4 rate 1
After baked in vae-ft-mse-840000-ema-pruned(StabilityAI) VAE, pruned ema, get MergeStove2.5D_mk6.
**第x轮** 把AbyssOrangeMix2_nsfw替换为AbyssOrangeMix2_sfw,然后用全FP32格式重构mk5,得到modelx。
**第七轮x** modelx+Nothing-V0.3(来自中国,匿名) 加权和模式 比率0.1
**第八轮x** ()+7th_anime_v2_A(syaimu) 加权和模式 比率0.1
**第九轮x** ()+mdjrny-v4(Anonymous) MBW插件 仅调整in4层 比率1
嵌入vae-ft-mse-840000-ema-pruned(StabilityAI)这个VAE模型后,去掉EMA权重,得到MergeStove2.5D_mk6模型。
|
raj-p/bert-finetuned-ner-medical
|
raj-p
| 2024-01-20T04:10:51Z | 4 | 2 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-20T03:41:47Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: raj-p/bert-finetuned-ner-medical
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# raj-p/bert-finetuned-ner-medical
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1514
- Validation Loss: 0.2864
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3480, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3065 | 0.2755 | 0 |
| 0.1835 | 0.2722 | 1 |
| 0.1514 | 0.2864 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
coversia21/RVC_HankAnderson_Detroit
|
coversia21
| 2024-01-20T03:47:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:dataautogpt3/OpenDalleV1.1",
"base_model:adapter:dataautogpt3/OpenDalleV1.1",
"license:openrail",
"region:us"
] |
text-to-image
| 2024-01-20T03:41:57Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/Hank_Anderson_Detroit_Become_Human.webp
base_model: dataautogpt3/OpenDalleV1.1
instance_prompt: null
license: openrail
---
# RVC_Hank Anderson [Detroit: Become Human]
<Gallery />
## Download model
[Download](/coversia21/RVC_HankAnderson_Detroit/tree/main) them in the Files & versions tab.
|
Blink15/distilbert-base-uncased-lora-text-classification
|
Blink15
| 2024-01-20T03:46:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T03:40:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
wuxiangdan9978/project1
|
wuxiangdan9978
| 2024-01-20T03:32:19Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T03:32:19Z |
---
base_model: mistralai/Mixtral-8x7B-v0.1
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-Mixtral-8x7B-DPO
results: []
license: apache-2.0
language:
- en
---
# Nous Hermes 2 - Mixtral 8x7B - DPO

## Model description
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.
This is the SFT + DPO version of Mixtral Hermes 2, we have also released an SFT only version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO!
# Table of Contents
1. [Example Outputs](#example-outputs)
2. [Benchmark Results](#benchmark-results)
- GPT4All
- AGIEval
- BigBench
- Comparison to Mixtral-Instruct
3. [Prompt Format](#prompt-format)
4. [Inference Example Code](#inference-code)
5. [Quantized Models](#quantized-models)
## Example Outputs
### Writing Code for Data Visualization

### Writing Cyberpunk Psychedelic Poems

### Performing Backtranslation to Create Prompts from Input Text

## Benchmark Results
Nous-Hermes 2 on Mixtral 8x7B is a major improvement across the board on the benchmarks below compared to the base Mixtral model, and is the first model to beat the flagship Mixtral Finetune by MistralAI.
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5990|± |0.0143|
| | |acc_norm|0.6425|± |0.0140|
|arc_easy | 0|acc |0.8657|± |0.0070|
| | |acc_norm|0.8636|± |0.0070|
|boolq | 1|acc |0.8783|± |0.0057|
|hellaswag | 0|acc |0.6661|± |0.0047|
| | |acc_norm|0.8489|± |0.0036|
|openbookqa | 0|acc |0.3440|± |0.0213|
| | |acc_norm|0.4660|± |0.0223|
|piqa | 0|acc |0.8324|± |0.0087|
| | |acc_norm|0.8379|± |0.0086|
|winogrande | 0|acc |0.7616|± |0.0120|
```
Average: 75.70
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2402|± |0.0269|
| | |acc_norm|0.2520|± |0.0273|
|agieval_logiqa_en | 0|acc |0.4117|± |0.0193|
| | |acc_norm|0.4055|± |0.0193|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.5549|± |0.0220|
| | |acc_norm|0.5294|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.8010|± |0.0279|
| | |acc_norm|0.7913|± |0.0284|
|agieval_sat_en_without_passage| 0|acc |0.4806|± |0.0349|
| | |acc_norm|0.4612|± |0.0348|
|agieval_sat_math | 0|acc |0.4909|± |0.0338|
| | |acc_norm|0.4000|± |0.0331|
```
Average: 46.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6105|± |0.0355|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7182|± |0.0235|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5736|± |0.0308|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4596|± |0.0263|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3500|± |0.0214|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2500|± |0.0164|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5200|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3540|± |0.0214|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6900|± |0.0103|
|bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2535|± |0.0138|
|bigbench_snarks | 0|multiple_choice_grade|0.7293|± |0.0331|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6744|± |0.0149|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.7400|± |0.0139|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2176|± |0.0117|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1543|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5200|± |0.0289|
```
Average: 49.70
# Benchmark Comparison Charts
## GPT4All

## AGI-Eval

## BigBench Reasoning Test

## Comparison to Mixtral Instruct:
Our benchmarks show gains in many benchmarks against Mixtral Instruct v0.1, on average, beating the flagship Mixtral model.

# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(message, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: even in 4bit, it will require more than 24GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MixtralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True)
model = MixtralForCausalLM.from_pretrained(
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
# Quantized Models:
## All sizes of GGUF Quantizations are available here:
### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
(Note: If you have issues with these GGUF's try TheBloke's)
## TheBloke has also quantized Hermes Mixtral in various forms:
### SFT+DPO GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
### SFT+DPO GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
### SFT GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GPTQ
### SFT+DPO AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ
### SFT AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-AWQ
## There is also an MLX version available:
### https://huggingface.co/mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit
## Exllama2 quants available here:
### https://huggingface.co/qeternity/Nous-Hermes-2-Mixtral-8x7B-SFT-4bpw-h6-exl2
(other sizes available in Qeternity's repos)
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
qnguyen3/quan-1.8b-base
|
qnguyen3
| 2024-01-20T03:28:41Z | 45 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:KnutJaegersberg/Qwen-1_8B-Llamafied",
"base_model:finetune:KnutJaegersberg/Qwen-1_8B-Llamafied",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T03:26:01Z |
---
license: other
base_model: KnutJaegersberg/Qwen-1_8B-Llamafied
tags:
- generated_from_trainer
model-index:
- name: qwen-1.8b-vi-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# qwen-1.8b-vi-pt
This model is a fine-tuned version of [KnutJaegersberg/Qwen-1_8B-Llamafied](https://huggingface.co/KnutJaegersberg/Qwen-1_8B-Llamafied) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
liwii/output
|
liwii
| 2024-01-20T03:24:56Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"generated_from_trainer",
"base_model:line-corporation/line-distilbert-base-japanese",
"base_model:finetune:line-corporation/line-distilbert-base-japanese",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T09:41:35Z |
---
license: apache-2.0
base_model: line-corporation/line-distilbert-base-japanese
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [line-corporation/line-distilbert-base-japanese](https://huggingface.co/line-corporation/line-distilbert-base-japanese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3471
- Accuracy: 0.8672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 306 | 0.3968 | 0.8594 |
| 0.4221 | 2.0 | 612 | 0.3889 | 0.8594 |
| 0.4221 | 3.0 | 918 | 0.3814 | 0.8594 |
| 0.4026 | 4.0 | 1224 | 0.3775 | 0.8594 |
| 0.396 | 5.0 | 1530 | 0.3724 | 0.8594 |
| 0.396 | 6.0 | 1836 | 0.3707 | 0.8594 |
| 0.392 | 7.0 | 2142 | 0.3721 | 0.8594 |
| 0.392 | 8.0 | 2448 | 0.3653 | 0.8594 |
| 0.3898 | 9.0 | 2754 | 0.3765 | 0.8613 |
| 0.3835 | 10.0 | 3060 | 0.3572 | 0.8594 |
| 0.3835 | 11.0 | 3366 | 0.3664 | 0.8613 |
| 0.3869 | 12.0 | 3672 | 0.3568 | 0.8613 |
| 0.3869 | 13.0 | 3978 | 0.3583 | 0.8613 |
| 0.3825 | 14.0 | 4284 | 0.3526 | 0.8613 |
| 0.3813 | 15.0 | 4590 | 0.3581 | 0.8613 |
| 0.3813 | 16.0 | 4896 | 0.3553 | 0.8613 |
| 0.3759 | 17.0 | 5202 | 0.3504 | 0.8613 |
| 0.3788 | 18.0 | 5508 | 0.3490 | 0.8613 |
| 0.3788 | 19.0 | 5814 | 0.3520 | 0.8613 |
| 0.3754 | 20.0 | 6120 | 0.3450 | 0.8613 |
| 0.3754 | 21.0 | 6426 | 0.3494 | 0.8633 |
| 0.3748 | 22.0 | 6732 | 0.3491 | 0.8633 |
| 0.3775 | 23.0 | 7038 | 0.3499 | 0.8633 |
| 0.3775 | 24.0 | 7344 | 0.3494 | 0.8633 |
| 0.3748 | 25.0 | 7650 | 0.3504 | 0.8672 |
| 0.3748 | 26.0 | 7956 | 0.3495 | 0.8672 |
| 0.3701 | 27.0 | 8262 | 0.3454 | 0.8633 |
| 0.3712 | 28.0 | 8568 | 0.3472 | 0.8633 |
| 0.3712 | 29.0 | 8874 | 0.3478 | 0.8672 |
| 0.3751 | 30.0 | 9180 | 0.3471 | 0.8672 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
jlbaker361/ft-ddpo25
|
jlbaker361
| 2024-01-20T03:16:42Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-19T18:33:02Z |
---
{}
---
# DDPO trained model
num_epochs=15
train_gradient_accumulation_steps=4
sample_num_steps=30
sample_batch_size=4
train_batch_size=4
sample_num_batches_per_epoch=32
|
xiawei910/poca-SoccerTwos
|
xiawei910
| 2024-01-20T03:11:19Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-01-20T03:09:34Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: xiawei910/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DopeorNope/Mistralopithecus-v0.1-10.8B
|
DopeorNope
| 2024-01-20T03:03:26Z | 60 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:05:44Z |
---
license: cc-by-nc-sa-4.0
---
## Model Details
**Model Developers** Seungyoo Lee (DopeorNope)
이 모델은 Mistral Base의 새로운 아키텍쳐이며, 10.7B의 파라미터로 구성되었습니다. (Solar나, 시나트라 베이스 모델이 아닙니다.)
약 1.5B의 토큰으로 pretrain 되었으나, 실험단계로 향후 다시 훈련되어 새롭게 나올 예정입니다.
테스트용으로 올려봅니다.
Context length가 32k 까지지원 가능한 모델이며, 향후 더 완벽하게 설계하여 올리도록 하겠습니다.
|
ares1123/gender_classifier
|
ares1123
| 2024-01-20T03:02:33Z | 190 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-20T02:49:02Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8478260636329651
---
# Age Classifier
## Example Images
#### Female

#### Male

|
baltop/cdp_600
|
baltop
| 2024-01-20T02:52:45Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-01-20T02:52:28Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0
|
jhiggs/tim-robinson
|
jhiggs
| 2024-01-20T02:37:34Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-03T22:39:26Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: A photo of <s0><s1> itysl a man in a yellow shirt and black jacket
output:
url: image-0.png
- text: A photo of <s0><s1> itysl a man wearing a black jacket
output:
url: image-1.png
- text: A photo of <s0><s1> itysl a man in a red suit and tie is standing in front
of a colorful background
output:
url: image-2.png
- text: A photo of <s0><s1> itysl a man with a white shirt and a black jacket
output:
url: image-3.png
- text: A photo of <s0><s1> itysl a man in a blue shirt sitting at a desk
output:
url: image-4.png
- text: A photo of <s0><s1> itysl a man with a tie on
output:
url: image-5.png
- text: A photo of <s0><s1> itysl a man with a mustache
output:
url: image-6.png
- text: A photo of <s0><s1> itysl a man in a checkered shirt holding a red ball
output:
url: image-7.png
- text: A photo of <s0><s1> itysl a man holding a box of wine in front of him
output:
url: image-8.png
- text: A photo of <s0><s1> itysl a man sitting at a desk
output:
url: image-9.png
- text: A photo of <s0><s1> itysl a man sitting at a desk
output:
url: image-10.png
- text: A photo of <s0><s1> itysl a man with a blue shirt
output:
url: image-11.png
- text: A photo of <s0><s1> itysl a man wearing a white shirt
output:
url: image-12.png
- text: A photo of <s0><s1> itysl a man with a smile on his face
output:
url: image-13.png
- text: A photo of <s0><s1> itysl a man in a plaid shirt standing in front of a wall
output:
url: image-14.png
- text: A photo of <s0><s1> itysl a man holding his head with both hands
output:
url: image-15.png
- text: A photo of <s0><s1> itysl a man with a sad face looking at something
output:
url: image-16.png
- text: A photo of <s0><s1> itysl a man in a suit and tie making a gesture
output:
url: image-17.png
- text: A photo of <s0><s1> itysl a man in a car
output:
url: image-18.png
- text: A photo of <s0><s1> itysl a man in a black jacket and white shirt standing
in an office
output:
url: image-19.png
- text: A photo of <s0><s1> itysl a man in a black jacket and blue shirt smiling
output:
url: image-20.png
- text: A photo of <s0><s1> itysl a man in glasses and a polo shirt
output:
url: image-21.png
- text: A photo of <s0><s1> itysl a man in a jacket and a tie
output:
url: image-22.png
- text: A photo of <s0><s1> itysl a man in a suit standing in front of a window
output:
url: image-23.png
- text: A photo of <s0><s1> itysl a man holding a pizza
output:
url: image-24.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1> itysl
license: openrail++
---
# SDXL LoRA DreamBooth - jhiggs/tim-robinson
<Gallery />
## Model description
### These are jhiggs/tim-robinson LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`tim-robinson.safetensors` here 💾](/jhiggs/tim-robinson/blob/main/tim-robinson.safetensors)**.
- Place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:tim-robinson:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`tim-robinson_emb.safetensors` here 💾](/jhiggs/tim-robinson/blob/main/tim-robinson_emb.safetensors)**.
- Place it on it on your `embeddings` folder
- Use it by adding `tim-robinson_emb` to your prompt. For example, `A photo of tim-robinson_emb itysl`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jhiggs/tim-robinson', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='jhiggs/tim-robinson', filename='tim-robinson_emb.safetensors' repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1> itysl').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/jhiggs/tim-robinson/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled. False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Chen311/Model_1.5
|
Chen311
| 2024-01-20T02:09:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-29T01:07:15Z |
---
license: creativeml-openrail-m
---
|
Tillmandev/LunarLander10m
|
Tillmandev
| 2024-01-20T01:52:34Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-16T12:04:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.15 +/- 17.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ntc-ai/SDXL-LoRA-slider.in-a-hot-air-balloon-race
|
ntc-ai
| 2024-01-20T01:22:27Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-20T01:22:24Z |
---
language:
- en
thumbnail: "images/evaluate/in a hot air balloon race.../in a hot air balloon race_17_3.0.png"
widget:
- text: in a hot air balloon race
output:
url: images/in a hot air balloon race_17_3.0.png
- text: in a hot air balloon race
output:
url: images/in a hot air balloon race_19_3.0.png
- text: in a hot air balloon race
output:
url: images/in a hot air balloon race_20_3.0.png
- text: in a hot air balloon race
output:
url: images/in a hot air balloon race_21_3.0.png
- text: in a hot air balloon race
output:
url: images/in a hot air balloon race_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "in a hot air balloon race"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - in a hot air balloon race (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/in a hot air balloon race_17_-3.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_17_0.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_17_3.0.png" width=256 height=256 /> |
| <img src="images/in a hot air balloon race_19_-3.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_19_0.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_19_3.0.png" width=256 height=256 /> |
| <img src="images/in a hot air balloon race_20_-3.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_20_0.0.png" width=256 height=256 /> | <img src="images/in a hot air balloon race_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
in a hot air balloon race
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.in-a-hot-air-balloon-race', weight_name='in a hot air balloon race.safetensors', adapter_name="in a hot air balloon race")
# Activate the LoRA
pipe.set_adapters(["in a hot air balloon race"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, in a hot air balloon race"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
zelihami/nlpfinalbert0
|
zelihami
| 2024-01-20T01:22:09Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-turkish-128k-uncased",
"base_model:finetune:dbmdz/bert-base-turkish-128k-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T00:33:12Z |
---
license: mit
base_model: dbmdz/bert-base-turkish-128k-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: nlpfinalbert0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nlpfinalbert0
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3191
- Accuracy: 0.88
- F1: 0.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
RatanRohith/NeuralMathChat-7B-V0.2
|
RatanRohith
| 2024-01-20T01:17:19Z | 1,362 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Q-bert/MetaMath-Cybertron-Starling",
"Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T01:13:32Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Q-bert/MetaMath-Cybertron-Starling
- Intel/neural-chat-7b-v3-3
---
# NeuralMathChat-7B-V0.2
NeuralMathChat-7B-V0.2 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Q-bert/MetaMath-Cybertron-Starling
layer_range: [0, 32]
- model: Intel/neural-chat-7b-v3-3
layer_range: [0, 32]
merge_method: slerp
base_model: Q-bert/MetaMath-Cybertron-Starling
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
thrunlab/Mistral-7B-v0.1_colaMistral_scratch_cola
|
thrunlab
| 2024-01-20T00:59:07Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-20T00:40:54Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
- matthews_correlation
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: Mistral-7B-v0.1_colaMistral_scratch_cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1_colaMistral_scratch_cola
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4281
- Accuracy: {'accuracy': 0.8387850467289719}
- Matthews Correlation: 0.6114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 2
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------:|:--------------------:|
| 1.9322 | 0.17 | 20 | 1.5215 | {'accuracy': 0.5743048897411314} | 0.0818 |
| 1.1953 | 0.33 | 40 | 0.9950 | {'accuracy': 0.660594439117929} | 0.1870 |
| 0.6611 | 0.5 | 60 | 0.7549 | {'accuracy': 0.7353787152444871} | 0.3527 |
| 0.6165 | 0.66 | 80 | 0.6317 | {'accuracy': 0.7583892617449665} | 0.4081 |
| 0.5467 | 0.83 | 100 | 0.5667 | {'accuracy': 0.7842761265580057} | 0.5041 |
| 0.4864 | 1.0 | 120 | 0.5268 | {'accuracy': 0.7996164908916586} | 0.5385 |
| 0.478 | 1.16 | 140 | 0.4803 | {'accuracy': 0.8283796740172579} | 0.5859 |
| 0.439 | 1.33 | 160 | 0.4965 | {'accuracy': 0.8293384467881112} | 0.5818 |
| 0.4395 | 1.49 | 180 | 0.4669 | {'accuracy': 0.8283796740172579} | 0.5778 |
| 0.4202 | 1.66 | 200 | 0.5002 | {'accuracy': 0.825503355704698} | 0.6192 |
| 0.3485 | 1.83 | 220 | 0.4360 | {'accuracy': 0.8389261744966443} | 0.6099 |
| 0.442 | 1.99 | 240 | 0.4391 | {'accuracy': 0.840843720038351} | 0.6121 |
| 0.3752 | 2.16 | 260 | 0.4306 | {'accuracy': 0.8446788111217641} | 0.6474 |
| 0.3013 | 2.32 | 280 | 0.4163 | {'accuracy': 0.8427612655800575} | 0.6216 |
| 0.3395 | 2.49 | 300 | 0.4151 | {'accuracy': 0.8542665388302972} | 0.6592 |
| 0.3305 | 2.66 | 320 | 0.4096 | {'accuracy': 0.8475551294343241} | 0.6299 |
| 0.342 | 2.82 | 340 | 0.4101 | {'accuracy': 0.8465963566634708} | 0.6322 |
| 0.3183 | 2.99 | 360 | 0.4166 | {'accuracy': 0.8494726749760306} | 0.6364 |
| 0.2551 | 3.15 | 380 | 0.4321 | {'accuracy': 0.8542665388302972} | 0.6503 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
UAEpro/whisper-small-ar-2
|
UAEpro
| 2024-01-20T00:42:47Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ar",
"dataset:mozilla-foundation/common_voice_16_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-15T20:58:48Z |
---
language:
- ar
license: apache-2.0
base_model: uaepro/whisper-small-ar-2
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Small ar - majed test
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16.0
type: mozilla-foundation/common_voice_16_0
config: ar
split: test
args: 'config: ar, split: test'
metrics:
- name: Wer
type: wer
value: 168.22177271055537
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ar - majed test
This model is a fine-tuned version of [uaepro/whisper-small-ar-2](https://huggingface.co/uaepro/whisper-small-ar-2) on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3392
- Wer: 168.2218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1459 | 0.41 | 1000 | 0.3714 | 182.4752 |
| 0.1378 | 0.82 | 2000 | 0.3486 | 177.9993 |
| 0.0738 | 1.24 | 3000 | 0.3513 | 184.2939 |
| 0.0855 | 1.65 | 4000 | 0.3392 | 168.2218 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jdang/openhermes-mistral-dpo-gptq
|
jdang
| 2024-01-20T00:35:18Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"base_model:finetune:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-12T17:05:27Z |
---
license: apache-2.0
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: openhermes-mistral-dpo-gptq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openhermes-mistral-dpo-gptq
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6104
- Rewards/chosen: -0.0458
- Rewards/rejected: -0.4535
- Rewards/accuracies: 0.6875
- Rewards/margins: 0.4077
- Logps/rejected: -390.3771
- Logps/chosen: -149.5892
- Logits/rejected: -1.3692
- Logits/chosen: -1.4352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6865 | 0.01 | 10 | 0.6792 | -0.0093 | -0.0078 | 0.6875 | -0.0015 | -385.9200 | -149.2238 | -1.3698 | -1.4189 |
| 0.6882 | 0.01 | 20 | 0.6660 | -0.0137 | -0.0526 | 0.625 | 0.0389 | -386.3681 | -149.2680 | -1.3729 | -1.4240 |
| 0.6391 | 0.01 | 30 | 0.6446 | 0.0000 | -0.1131 | 0.625 | 0.1131 | -386.9731 | -149.1310 | -1.3737 | -1.4292 |
| 0.639 | 0.02 | 40 | 0.6271 | -0.0337 | -0.2758 | 0.6875 | 0.2421 | -388.6000 | -149.4686 | -1.3729 | -1.4342 |
| 0.6533 | 0.03 | 50 | 0.6104 | -0.0458 | -0.4535 | 0.6875 | 0.4077 | -390.3771 | -149.5892 | -1.3692 | -1.4352 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
arielogg/t5-small-finetuned-en-to-fr
|
arielogg
| 2024-01-20T00:29:44Z | 45 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-19T22:16:43Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: arielogg/t5-small-finetuned-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# arielogg/t5-small-finetuned-en-to-fr
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1390
- Validation Loss: 0.9577
- Train Bleu: 35.5719
- Train Gen Len: 29.4217
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 1.1390 | 0.9577 | 35.5719 | 29.4217 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/WinterGoddess-1.4x-70B-L2-3.5bpw-h6-exl2
|
LoneStriker
| 2024-01-20T00:24:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T00:08:10Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and had tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
why does it show mergekit in the safetensors.index metadata? -> I used DARE method to merge the 3 models. Then Axolotl qLoRA. then used lora-merge, copying the files of the base merged model because they didn't save to the new one, only the .safetensor files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
mu0gum/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.52
|
mu0gum
| 2024-01-20T00:17:38Z | 59 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T16:44:20Z |
---
license: cc-by-nc-4.0
---
# AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v0.52
베이스 모델 : 42dot/42dot_LLM-PLM-1.3B
학습 데이터 : 자체 제작한 Open Orca 스타일 데이터셋 약 28,000건 (데이터 수량 조정)
학습 방법 : Full finetuning
## ko-lm-evaluation-harness(0-shot)
|kobest_boolq|kobest_copa|kobest_hellaswag|kobest_sentineg|kohatespeech|kohatespeech_apeach|kohatespeech_gen_bias|korunsmile|nsmc|pawsx_ko|
|--|--|--|--|--|--|--|--|--|--|
|0.5826210826210826|0.68|0.436|0.7758186397984886|0.2908704883227176|0.5082228116710875|0.14225053078556263|0.39027300210119553|0.65938|0.513|
## Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
|
segolilylabs/Lily-Cybersecurity-7B-v0.2-GGUF
|
segolilylabs
| 2024-01-20T00:01:19Z | 3,243 | 16 | null |
[
"gguf",
"cybersecurity",
"cyber security",
"hacking",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-12T02:13:04Z |
---
license: apache-2.0
tags:
- cybersecurity
- cyber security
- hacking
language:
- en
---
My attempt at making GGUF versions of <a href= "https://huggingface.co/segolilylabs/Lily-Cybersecurity-7B-v0.2">segolilylabs/Lily-Cybersecurity-7B-v0.2</a>
|
arnavgrg/phi2-adapter-test
|
arnavgrg
| 2024-01-19T23:56:52Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"region:us"
] | null | 2024-01-19T23:56:22Z |
---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
brishtiteveja/bangla-llama-7b-base-v0.1
|
brishtiteveja
| 2024-01-19T23:56:22Z | 4 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"arxiv:1910.09700",
"base_model:unsloth/llama-2-7b",
"base_model:adapter:unsloth/llama-2-7b",
"region:us"
] | null | 2024-01-19T23:47:21Z |
---
library_name: peft
base_model: unsloth/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
LoneStriker/WinterGoddess-1.4x-70B-L2-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T23:52:55Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T23:24:42Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and had tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
why does it show mergekit in the safetensors.index metadata? -> I used DARE method to merge the 3 models. Then Axolotl qLoRA. then used lora-merge, copying the files of the base merged model because they didn't save to the new one, only the .safetensor files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
Aneeth/zephyr_7k
|
Aneeth
| 2024-01-19T23:51:26Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-beta-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-beta-GPTQ",
"license:mit",
"region:us"
] | null | 2024-01-17T11:53:37Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-beta-GPTQ
model-index:
- name: zephyr_7k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr_7k
This model is a fine-tuned version of [TheBloke/zephyr-7B-beta-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-beta-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3761 | 0.23 | 100 | 1.1737 |
| 0.8147 | 0.46 | 200 | 0.4469 |
| 0.3427 | 0.68 | 300 | 0.2869 |
| 0.2726 | 0.91 | 400 | 0.2630 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
MaziyarPanahi/Breeze-7B-Instruct-v0_1-GPTQ
|
MaziyarPanahi
| 2024-01-19T23:48:46Z | 376 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"pytorch",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space",
"conversational",
"base_model:MediaTek-Research/Breeze-7B-Instruct-v0_1",
"base_model:finetune:MediaTek-Research/Breeze-7B-Instruct-v0_1"
] |
text-generation
| 2024-01-19T23:46:33Z |
---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- safetensors
- mistral
- text-generation
- zh
- en
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- has_space
model_name: Breeze-7B-Instruct-v0_1-GPTQ
base_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
inference: false
model_creator: MediaTek-Research
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/Breeze-7B-Instruct-v0_1-GPTQ](https://huggingface.co/MaziyarPanahi/Breeze-7B-Instruct-v0_1-GPTQ) is a quantized (GPTQ) version of [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/Breeze-7B-Instruct-v0_1-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
|
LoneStriker/WinterGoddess-1.4x-70B-L2-4.65bpw-h6-exl2
|
LoneStriker
| 2024-01-19T23:05:01Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:52:48Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and had tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
why does it show mergekit in the safetensors.index metadata? -> I used DARE method to merge the 3 models. Then Axolotl qLoRA. then used lora-merge, copying the files of the base merged model because they didn't save to the new one, only the .safetensor files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
LoneStriker/TenyxChat-8x7B-v1-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T22:47:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2401.04088",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T22:32:44Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing TenyxChat-8x7B-v1, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)),
similar to that of our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
TenyxChat-8x7B-v1 was trained using eight A100s (80GB) for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
# Model details
- Model type: Fine-tuned Mixture Of Expert 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)
## Usage
Our model uses a simple chat template based on Mixtral-8x7B-Instruct-v0.1 . The chat template usage with a Hugging face generation example is shown below.
### Chat Template (Jinja)
```rust
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
### Hugging face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
### Output
```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```
# Performance
At the time of release (Jan 2024), TenyxChat-8x7B-v1 is the highest-ranked model, only superseded by GPT4, on the MT-Bench evaluation available for download and commercial use.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.
| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |
*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

# Limitations
TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
TenyxChat-8x7B-v1, similar to Mixtral-8x7B-Instruct-v0.1 , is distributed under the Apache License 2.0.
# Citation
If you use TenyxChat-8x7B-v1 for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
```
|
arun100/whisper-base-hi-3
|
arun100
| 2024-01-19T22:46:48Z | 60 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:google/fleurs",
"base_model:arun100/whisper-base-hi-2",
"base_model:finetune:arun100/whisper-base-hi-2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-19T16:26:54Z |
---
license: apache-2.0
base_model: arun100/whisper-base-hi-2
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Base Hindi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs hi_in
type: google/fleurs
config: hi_in
split: test
args: hi_in
metrics:
- name: Wer
type: wer
value: 27.72060783790989
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Hindi
This model is a fine-tuned version of [arun100/whisper-base-hi-2](https://huggingface.co/arun100/whisper-base-hi-2) on the google/fleurs hi_in dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4468
- Wer: 27.7206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4805 | 33.0 | 250 | 0.4868 | 30.4186 |
| 0.3559 | 66.0 | 500 | 0.4417 | 29.0909 |
| 0.2655 | 99.0 | 750 | 0.4307 | 28.2165 |
| 0.1987 | 133.0 | 1000 | 0.4350 | 27.8326 |
| 0.1472 | 166.0 | 1250 | 0.4468 | 27.7206 |
| 0.1061 | 199.0 | 1500 | 0.4640 | 28.0992 |
| 0.0767 | 233.0 | 1750 | 0.4835 | 28.5737 |
| 0.0541 | 266.0 | 2000 | 0.5032 | 28.6857 |
| 0.0396 | 299.0 | 2250 | 0.5202 | 28.7763 |
| 0.03 | 333.0 | 2500 | 0.5353 | 29.2029 |
| 0.0237 | 366.0 | 2750 | 0.5479 | 28.9096 |
| 0.0195 | 399.0 | 3000 | 0.5587 | 28.9096 |
| 0.0163 | 433.0 | 3250 | 0.5683 | 28.9469 |
| 0.014 | 466.0 | 3500 | 0.5767 | 29.1336 |
| 0.0121 | 499.0 | 3750 | 0.5838 | 29.3415 |
| 0.0108 | 533.0 | 4000 | 0.5900 | 29.2775 |
| 0.01 | 566.0 | 4250 | 0.5951 | 29.6081 |
| 0.0093 | 599.0 | 4500 | 0.5988 | 29.4855 |
| 0.0088 | 633.0 | 4750 | 0.6012 | 29.5281 |
| 0.0087 | 666.0 | 5000 | 0.6020 | 29.4268 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
LegoClipStars/River_Kendall_RH
|
LegoClipStars
| 2024-01-19T22:46:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:dataautogpt3/OpenDalleV1.1",
"base_model:adapter:dataautogpt3/OpenDalleV1.1",
"license:cc-by-4.0",
"region:us"
] |
text-to-image
| 2024-01-19T22:45:08Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: NEFT
parameters:
negative_prompt: High school student
output:
url: images/5b7538c2190e21a3a865cbe703015bd6.jpg
base_model: dataautogpt3/OpenDalleV1.1
instance_prompt: Please spare me
license: cc-by-4.0
---
# River_Kendall_Rainbow_High
<Gallery />
## Model description
Here's my RVC voice model of River Kendall from Rainbow High
## Trigger words
You should use `Please spare me` to trigger the image generation.
## Download model
[Download](/LegoClipStars/River_Kendall_RH/tree/main) them in the Files & versions tab.
|
RiverTest/RiverMTG20
|
RiverTest
| 2024-01-19T22:46:01Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:RiverTest/RiverMTG15",
"base_model:adapter:RiverTest/RiverMTG15",
"region:us"
] | null | 2024-01-19T22:45:55Z |
---
library_name: peft
base_model: RiverTest/RiverMTG15
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Kooten/WinterGoddess-1.4x-70B-L2-IQ2-GGUF
|
Kooten
| 2024-01-19T22:44:15Z | 8 | 1 | null |
[
"gguf",
"en",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T19:58:51Z |
---
license: cc-by-nc-4.0
language:
- en
---
# WinterGoddess-1.4x-70B-L2 IQ2-GGUF
## Description
IQ2-GGUF quants of [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2)
Unlike regular GGUF quants this uses important matrix similar to Quip# to keep the quant from degrading too much even at 2bpw allowing you to run larger models on less powerful machines.
***NOTE:*** Currently you will need experimental branches of Koboldcpp or Ooba for this to work.
- Nexesenex have compiled Windows binaries [HERE](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.55.1_b1842)
- [llamacpp_0.2.29 branch](https://github.com/oobabooga/text-generation-webui/tree/llamacpp_0.2.29) of Ooba also works
[More info about IQ2](https://github.com/ggerganov/llama.cpp/pull/4897)
# Models
Models: [IQ2-XS](https://huggingface.co/Kooten/WinterGoddess-1.4x-70B-L2-IQ2-GGUF/blob/main/WinterGoddess-1.4x-70B-L2-IQ2_XS.gguf), [IQ2-XXS](https://huggingface.co/Kooten/WinterGoddess-1.4x-70B-L2-IQ2-GGUF/blob/main/WinterGoddess-1.4x-70B-L2-IQ2_XXS.gguf)
Regular GGUF Quants: [Here](https://huggingface.co/TheBloke/WinterGoddess-1.4x-70B-L2-GGUF)
## Prompt Format
### Alpaca:
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
## Contact
Kooten on discord
|
ImadSaddik/SME_EN_Ludwig_0_9_1
|
ImadSaddik
| 2024-01-19T22:41:36Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2023-12-29T21:38:52Z |
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
ruslanmv/TensorFlowTTS
|
ruslanmv
| 2024-01-19T22:39:55Z | 0 | 1 | null |
[
"TensorFlowTTS",
"audio",
"text-to-speech",
"text-to-mel",
"eng",
"dataset:LJSpeech",
"arxiv:1905.09263",
"license:apache-2.0",
"region:us"
] |
text-to-speech
| 2022-03-02T23:29:05Z |
---
tags:
- TensorFlowTTS
- audio
- text-to-speech
- text-to-mel
language: eng
license: apache-2.0
datasets:
- LJSpeech
widget:
- text: "How are you?"
---
This repository provides a pretrained [FastSpeech](https://arxiv.org/abs/1905.09263) trained on LJSpeech dataset (ENG). For a detail of the model, we encourage you to read more about
[TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).
## Install TensorFlowTTS
First of all, please install TensorFlowTTS with the following command:
```
pip install TensorFlowTTS
```
### Converting your Text to Mel Spectrogram
```python
import numpy as np
import soundfile as sf
import yaml
import tensorflow as tf
from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel
processor = AutoProcessor.from_pretrained("ruslanmv/tensorflowtts")
fastspeech = TFAutoModel.from_pretrained("ruslanmv/tensorflowtts")
text = "How are you?"
input_ids = processor.text_to_sequence(text)
mel_before, mel_after, duration_outputs = fastspeech.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
)
```
|
coversia21/RVC_ComoTanMuchachos
|
coversia21
| 2024-01-19T22:39:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:dataautogpt3/OpenDalleV1.1",
"base_model:adapter:dataautogpt3/OpenDalleV1.1",
"license:openrail",
"region:us"
] |
text-to-image
| 2024-01-19T22:34:55Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/muchacho.webp
base_model: dataautogpt3/OpenDalleV1.1
instance_prompt: null
license: openrail
---
# RVC_ComoTanMuchachos
<Gallery />
## Download model
[Download](/coversia21/RVC_ComoTanMuchachos/tree/main) them in the Files & versions tab.
|
ib1368/ppo-CartPole-v1-scratch
|
ib1368
| 2024-01-19T22:32:19Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T22:30:52Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -161.68 +/- 83.93
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ib1368/ppo-CartPole-v1-scratch'
'batch_size': 512
'minibatch_size': 128}
```
|
pervision/enchantimalistic
|
pervision
| 2024-01-19T22:31:37Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"en",
"ru",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] | null | 2024-01-19T22:30:23Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
- ru
metrics:
- character
- bleurt
library_name: adapter-transformers
---
|
mitultiwari/llama2_instruct_generation
|
mitultiwari
| 2024-01-19T22:30:20Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-19T22:30:02Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: NousResearch/Llama-2-7b-hf
model-index:
- name: llama2_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_instruct_generation
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9665 | 0.0 | 20 | 1.8063 |
| 1.9337 | 0.01 | 40 | 1.7776 |
| 1.9031 | 0.01 | 60 | 1.7639 |
| 1.8382 | 0.01 | 80 | 1.7524 |
| 1.8221 | 0.01 | 100 | 1.7358 |
| 1.8198 | 0.02 | 120 | 1.7104 |
| 1.8309 | 0.02 | 140 | 1.7001 |
| 1.8521 | 0.02 | 160 | 1.6942 |
| 1.8176 | 0.02 | 180 | 1.6924 |
| 1.8142 | 0.03 | 200 | 1.6897 |
| 1.7262 | 0.03 | 220 | 1.6878 |
| 1.7024 | 0.03 | 240 | 1.6862 |
| 1.8898 | 0.04 | 260 | 1.6845 |
| 1.7862 | 0.04 | 280 | 1.6825 |
| 1.8654 | 0.04 | 300 | 1.6832 |
| 1.7961 | 0.04 | 320 | 1.6795 |
| 1.86 | 0.05 | 340 | 1.6784 |
| 1.846 | 0.05 | 360 | 1.6793 |
| 1.8121 | 0.05 | 380 | 1.6765 |
| 1.8124 | 0.05 | 400 | 1.6748 |
| 1.8933 | 0.06 | 420 | 1.6744 |
| 1.8118 | 0.06 | 440 | 1.6734 |
| 1.7212 | 0.06 | 460 | 1.6734 |
| 1.8208 | 0.07 | 480 | 1.6727 |
| 1.83 | 0.07 | 500 | 1.6726 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/TenyxChat-8x7B-v1-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T22:22:16Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2401.04088",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T22:10:33Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing TenyxChat-8x7B-v1, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)),
similar to that of our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
TenyxChat-8x7B-v1 was trained using eight A100s (80GB) for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
# Model details
- Model type: Fine-tuned Mixture Of Expert 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)
## Usage
Our model uses a simple chat template based on Mixtral-8x7B-Instruct-v0.1 . The chat template usage with a Hugging face generation example is shown below.
### Chat Template (Jinja)
```rust
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
### Hugging face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
### Output
```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```
# Performance
At the time of release (Jan 2024), TenyxChat-8x7B-v1 is the highest-ranked model, only superseded by GPT4, on the MT-Bench evaluation available for download and commercial use.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.
| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |
*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

# Limitations
TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
TenyxChat-8x7B-v1, similar to Mixtral-8x7B-Instruct-v0.1 , is distributed under the Apache License 2.0.
# Citation
If you use TenyxChat-8x7B-v1 for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
```
|
afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF
|
afrideva
| 2024-01-19T22:21:52Z | 71 | 2 | null |
[
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"es",
"ru",
"zh",
"de",
"fr",
"th",
"ca",
"it",
"ja",
"pl",
"eo",
"eu",
"vi",
"fi",
"hu",
"ar",
"nl",
"da",
"tr",
"ko",
"he",
"id",
"cs",
"bn",
"sv",
"base_model:NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2",
"base_model:quantized:NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2",
"region:us",
"conversational"
] |
text-generation
| 2024-01-19T22:11:44Z |
---
base_model: NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2
inference: false
language:
- en
- es
- ru
- zh
- de
- fr
- th
- ca
- it
- ja
- pl
- eo
- eu
- vi
- fi
- hu
- ar
- nl
- da
- tr
- ko
- he
- id
- cs
- bn
- sv
model_creator: NickyNicky
model_name: dolphin-2_6-phi-2_oasst2_chatML_V2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF
Quantized GGUF model files for [dolphin-2_6-phi-2_oasst2_chatML_V2](https://huggingface.co/NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2) from [NickyNicky](https://huggingface.co/NickyNicky)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.fp16.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.fp16.gguf) | fp16 | 5.56 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q2_k.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q2_k.gguf) | q2_k | 1.09 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q3_k_m.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q3_k_m.gguf) | q3_k_m | 1.49 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q4_k_m.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q4_k_m.gguf) | q4_k_m | 1.79 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q5_k_m.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q5_k_m.gguf) | q5_k_m | 2.07 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q6_k.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q6_k.gguf) | q6_k | 2.29 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q8_0.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q8_0.gguf) | q8_0 | 2.96 GB |
## Original Model Card:
```
- model fine tune base: cognitivecomputations/dolphin-2_6-phi-2
- sft
- flash-attention 2
- loss: 0.85
- steps: 3000
- max_length: 2028
- neftune_noise_alpha: 5
```

Install packages
```Python
!python -m pip install --upgrade pip
!pip install -q datasets trl peft bitsandbytes sentencepiece wandb
!pip install -q accelerate safetensors deepspeed
!pip install -q scipy
!export CUDA_HOME=/usr/local/cuda-11.8
# !pip install ninja
!pip install ninja packaging --upgrade -qqq
!MAX_JOBS=4 pip install flash-attn --no-build-isolation -qqq
!pip install git+"https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary" -qqq
!python -m pip install optimum -qqq
```
Ioad model and generate text
```Python
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging,
GenerationConfig,
TextIteratorStreamer,
)
# from attention_sinks import AutoModelForCausalLM
import torch
model_id = "NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2"
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map="auto",
trust_remote_code=True,
torch_dtype=torch.bfloat16,
load_in_4bit=True,
low_cpu_mem_usage= True,
flash_attn=True,
flash_rotary=True,
fused_dense=True,
)
max_length=2028
print("max_length",max_length)
tokenizer = AutoTokenizer.from_pretrained(model_id,
use_fast = True,
max_length=max_length,
trust_remote_code=True,)
prompt= """<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
tengo hambre que me recomiendas<|im_end|>
<|im_start|>assistant"""
inputs = tokenizer.encode(prompt,
return_tensors="pt",
add_special_tokens=False).cuda()#.to("cuda") # False # True
generation_config = GenerationConfig(
max_new_tokens=700,
temperature=0.5,
top_p=0.9,
top_k=45,
repetition_penalty=1.15, #1.1, # 1.0 means no penalty, > 1.0 means penalty, 1.2 from CTRL paper
do_sample=True,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
# bos_token_id=tokenizer.eos_token_id,
# use_cache=True,
# stopping_criteria= StoppingCriteriaList([stopping_criteria]),
)
outputs = model.generate(generation_config=generation_config,
input_ids=inputs,)
# tokenizer.decode(outputs[0], skip_special_tokens=False) #True
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
'''OUTPUT:
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
tengo hambre que me recomiendas<|im_end|>
<|im_start|>assistant
Hay muchos tipos de alimentos que puedes probar si tienes hambre, pero aquí te muestro una lista ordenada por calor:
1- Frutas y verduras: Estas son buena opción para quitar hambre sin agotar tu cuerpo con grandes cantidades de carbohidratos. Algunas frutas y verduras que podrían ser suficientemente altas en calor durante el día incluyen tomates, plátanos, espinacas, papas, nueces, manzanas, limones, guisantes, cucumbers, zanahorias, etc.
2- Proteínas: Estas son importantes para mantener tu masa muscular y fuerzosa durante el día. Algunas proteínas que podrían ser útiles para quitar hambre durante el día incluyen carne, aceite de oliva, miel, yogur, leche fresca o sopa de gorditas, etc.
3- Carbohidratos: Estas son importantes para energizarte durante el día y mantenerte físico. Algunas frutas y verduras que podrían ser útiles para quitar hambre durante el día incluyen pan, tortillas, roti, arroz, pasta, rice, polenta, cereales, granola, etc.
4- Grains: Estas son importantes para mantenerte satiente durante el día y reducir la frecuencia de comidas rápida. Algunas gromas que podrían ser útiles para quitar hambre durante el día incluyen lentejas, farinas, tortilla, ensalada, etc.
5- Nuts y semolina: Estas son buenas opciones para quitar hambre durante el día sin agotar tu cuerpo con grandes cantidades de azúcar. Algunas frutas y verduras que podrían ser útiles para quitar hambre durante el día incluyen anacardios, almendras, macetas, bocaditos, panquesado, etc.
6- Papel picado: Esta es una opción deliciosa y económica que puedes preparar en caso de quitar hambre durante el día. Para hacer papel picado, primero cortezamos las frutas y verduras que deseas usarlas, y luego cortezamos las frutas y verduras que no deseas usarlas. A continuación, cortezamos las frutas y verduras que deseas usarlas más grandes y que estén más frescas, y luego cortezamos las frutas y verduras
'''
```
|
nbeerbower/bruphin-gamma
|
nbeerbower
| 2024-01-19T22:06:54Z | 58 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:jan-hq/supermario-v2",
"base_model:merge:jan-hq/supermario-v2",
"base_model:nbeerbower/bruphin-beta",
"base_model:merge:nbeerbower/bruphin-beta",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T20:19:24Z |
---
license: apache-2.0
base_model:
- nbeerbower/bruphin-beta
- jan-hq/supermario-v2
tags:
- mergekit
- merge
---
# bruphin-gamma
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/bruphin-beta](https://huggingface.co/nbeerbower/bruphin-beta)
* [jan-hq/supermario-v2](https://huggingface.co/jan-hq/supermario-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/bruphin-beta
layer_range: [0, 40]
- model: jan-hq/supermario-v2
layer_range: [0, 40]
merge_method: slerp
base_model: nbeerbower/bruphin-beta
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
|
timuryun/autotrain-xr1bw-vrs40
|
timuryun
| 2024-01-19T21:57:56Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T21:57:52Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
simpla360/suero
|
simpla360
| 2024-01-19T21:13:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-19T21:08:26Z |
<title>Simpla 360 Suero Antiarrugas: Revitaliza tu Piel</title>
<h1>Simpla 360 Suero Antiarrugas: Revitaliza tu Piel</h1>
Para quienes buscan rejuvenecer su piel, Simpla 360 Suero Antiarrugas es la elección perfecta. Este suero de avanzada, disponible exclusivamente en <a href="https://es.mejornutra.xyz/?target=-7EBNQCgQAAAPZFwMXjAAFAQEREQoRCQoRDUIRDRIAAX9hZGNvbWJvATE&al=94332&subacc=hug"><b>>>>www.simpla360.com<<<</b></a>, está formulado para ofrecer resultados efectivos y visibles en la reducción de arrugas y líneas de expresión.
<a href="https://es.mejornutra.xyz/?target=-7EBNQCgQAAAPZFwMXjAAFAQEREQoRCQoRDUIRDRIAAX9hZGNvbWJvATE&al=94332&subacc=hug"><b>>>>IR AL SITIO WEB OFICIAL AQUI<<<</b></a>
Con un precio de solo 49 USD, Simpla 360 te ofrece una solución de alta calidad para el cuidado de tu piel. Este suero antiarrugas está enriquecido con ingredientes activos que nutren, hidratan y revitalizan la piel, mejorando su elasticidad y firmeza. Es ideal para todos los tipos de piel y es perfecto para incorporar en tu rutina diaria de cuidado facial.
Haz tu pedido en <a href="https://es.mejornutra.xyz/?target=-7EBNQCgQAAAPZFwMXjAAFAQEREQoRCQoRDUIRDRIAAX9hZGNvbWJvATE&al=94332&subacc=hug"><b>>>>www.simpla360.com<<<</b></a> y comienza a experimentar los beneficios de Simpla 360 Suero Antiarrugas. Este suero no solo combate los signos del envejecimiento, sino que también deja tu piel con una apariencia más juvenil y radiante. No esperes más para darle a tu piel el cuidado que se merece. ¡Simpla 360 es tu aliado para una piel hermosa y saludable!
|
lilianz/dqn-SpaceInvadersNoFrameskip-v4
|
lilianz
| 2024-01-19T21:13:17Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T21:12:41Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 627.00 +/- 138.37
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lilianz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lilianz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga lilianz
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 150000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
|
MaziyarPanahi
| 2024-01-19T21:09:13Z | 30 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"conversational",
"base_model:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"base_model:finetune:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO"
] |
text-generation
| 2024-01-19T20:56:47Z |
---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- safetensors
- mixtral
- text-generation
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- en
- base_model:mistralai/Mixtral-8x7B-v0.1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
inference: false
model_creator: NousResearch
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ](https://huggingface.co/MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ) is a quantized (GPTQ) version of [NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
|
andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF
|
andrijdavid
| 2024-01-19T21:08:30Z | 47 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"GGUF",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T20:58:33Z |
---
language:
- en
license: apache-2.0
tags:
- GGUF
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
quantized_by: andrijdavid
---
# TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF
- Original model: [TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
<!-- description start -->
## Description
This repo contains GGUF format model files for [TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF and below it, a specific filename to download, such as: TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: TinyLlama-1.1B-intermediate-step-1431k-3T
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
| - | | - | -- | -- | ----- |
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| TinyLlama-1.1B-intermediate-step-1195k-2.5T | 2.5T | 58.96 | 34.40 | 58.72 | 31.91 | 56.78 | 63.21 | 73.07 | 53.86 |
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |
<!-- original-model-card end -->
|
rheubanks/llama2_instruct_generation
|
rheubanks
| 2024-01-19T21:06:05Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-19T21:05:41Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: NousResearch/Llama-2-7b-hf
model-index:
- name: llama2_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_instruct_generation
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9724 | 0.0 | 20 | 1.8100 |
| 1.8173 | 0.01 | 40 | 1.7801 |
| 1.8184 | 0.01 | 60 | 1.7671 |
| 1.8725 | 0.01 | 80 | 1.7568 |
| 1.8967 | 0.01 | 100 | 1.7460 |
| 1.8943 | 0.02 | 120 | 1.7172 |
| 1.788 | 0.02 | 140 | 1.7045 |
| 1.8953 | 0.02 | 160 | 1.6986 |
| 1.8262 | 0.02 | 180 | 1.6943 |
| 1.8472 | 0.03 | 200 | 1.6926 |
| 1.8416 | 0.03 | 220 | 1.6896 |
| 1.838 | 0.03 | 240 | 1.6855 |
| 1.7743 | 0.04 | 260 | 1.6806 |
| 1.8562 | 0.04 | 280 | 1.6785 |
| 1.8562 | 0.04 | 300 | 1.6794 |
| 1.8117 | 0.04 | 320 | 1.6783 |
| 1.8193 | 0.05 | 340 | 1.6768 |
| 1.8807 | 0.05 | 360 | 1.6745 |
| 1.7641 | 0.05 | 380 | 1.6738 |
| 1.7738 | 0.05 | 400 | 1.6735 |
| 1.7759 | 0.06 | 420 | 1.6733 |
| 1.7089 | 0.06 | 440 | 1.6721 |
| 1.7984 | 0.06 | 460 | 1.6706 |
| 1.7243 | 0.07 | 480 | 1.6720 |
| 1.9205 | 0.07 | 500 | 1.6705 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ayratmsk/distilbert-base-uncased-finetuned-emotion
|
ayratmsk
| 2024-01-19T20:27:45Z | 89 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T15:40:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9185
- name: F1
type: f1
value: 0.9187183032682423
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Accuracy: 0.9185
- F1: 0.9187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8162 | 1.0 | 250 | 0.3287 | 0.9015 | 0.9002 |
| 0.2514 | 2.0 | 500 | 0.2206 | 0.9185 | 0.9187 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Lerik/cat_vs_dog_recognition
|
Lerik
| 2024-01-19T20:23:08Z | 0 | 0 |
fastai
|
[
"fastai",
"image-classification",
"en",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-01-18T20:53:58Z |
---
license: apache-2.0
language:
- en
library_name: fastai
pipeline_tag: image-classification
---
|
sddavicillo/wellformedjudge-google
|
sddavicillo
| 2024-01-19T20:22:30Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:google/electra-base-discriminator",
"base_model:finetune:google/electra-base-discriminator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T20:22:09Z |
---
license: apache-2.0
base_model: google/electra-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wellformedjudge-google
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wellformedjudge-google
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0576
- Accuracy: 0.7605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.059 | 1.0 | 2188 | 0.0578 | 0.7605 |
| 0.0373 | 2.0 | 4376 | 0.0552 | 0.7605 |
| 0.0216 | 3.0 | 6564 | 0.0576 | 0.7605 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
andrijdavid/TinyLlama-1.1B-Chat-v1.0-GGUF
|
andrijdavid
| 2024-01-19T20:18:29Z | 109 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"GGUF",
"conversational",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-01T23:03:49Z |
---
language:
- en
license: apache-2.0
tags:
- GGUF
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
widget:
- text: '<|system|>
You are a chatbot who can help code!</s>
<|user|>
Write me a function to calculate the first 10 digits of the fibonacci sequence
in Python and print it out to the CLI.</s>
<|assistant|>
'
quantized_by: andrijdavid
---
# TinyLlama-1.1B-Chat-v1.0-GGUF
- Original model: [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: andrijdavid/TinyLlama-1.1B-Chat-v1.0-GGUF and below it, a specific filename to download, such as: TinyLlama-1.1B-Chat-v1.0-f16.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download andrijdavid/TinyLlama-1.1B-Chat-v1.0-GGUF TinyLlama-1.1B-Chat-v1.0-f16.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download andrijdavid/TinyLlama-1.1B-Chat-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/TinyLlama-1.1B-Chat-v1.0-GGUF TinyLlama-1.1B-Chat-v1.0-f16.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m TinyLlama-1.1B-Chat-v1.0-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./TinyLlama-1.1B-Chat-v1.0-f16.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./TinyLlama-1.1B-Chat-v1.0-f16.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: TinyLlama-1.1B-Chat-v1.0
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was " initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contain 64k prompts and model completions that are ranked by GPT-4."
#### How to use
You will need the transformers>=4.34
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
```
<!-- original-model-card end -->
|
mitro99/whisper-tiny-polyai-enUS_fewer_epochs
|
mitro99
| 2024-01-19T20:16:26Z | 60 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-19T20:03:49Z |
---
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-polyai-enUS_fewer_epochs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
metrics:
- name: Wer
type: wer
value: 0.34946871310507677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-polyai-enUS_fewer_epochs
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6145
- Wer Ortho: 0.3800
- Wer: 0.3495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 2.9576 | 3.33 | 50 | 1.9424 | 0.5077 | 0.4050 |
| 0.5132 | 6.67 | 100 | 0.6382 | 0.4152 | 0.3684 |
| 0.2569 | 10.0 | 150 | 0.5925 | 0.3893 | 0.3554 |
| 0.0973 | 13.33 | 200 | 0.6145 | 0.3800 | 0.3495 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
soniox/Soniox-7B-v1.0
|
soniox
| 2024-01-19T20:15:16Z | 1,379 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T09:16:21Z |
---
license: apache-2.0
---
# Model Card for Soniox-7B-v1.0
Soniox 7B is a powerful large language model. Supports English and code with 8K context.
Matches GPT-4 performance on some benchmarks.
Built on top of Mistral 7B, enhanced with additional pre-training and fine-tuning for strong problem-solving capabilities.
Apache 2.0 License.
For more details, please read our [blog post](https://soniox.com/news/soniox-7B).
## Usage in Transformers
The model is available in transformers and can be used as follows:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "soniox/Soniox-7B-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_path)
device = "cuda"
model.to(device)
messages = [
{"role": "user", "content": "12 plus 21?"},
{"role": "assistant", "content": "33."},
{"role": "user", "content": "Five minus one?"},
]
tok_prompt = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = tok_prompt.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Inference deployment
Refer to our [documentation](https://docs.soniox.com) for inference with vLLM and other
deployment options.
|
castorini/rank_zephyr_7b_v1_full
|
castorini
| 2024-01-19T19:54:29Z | 2,210 | 20 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"en",
"arxiv:2312.02724",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:52:58Z |
---
tags:
- generated_from_trainer
license: mit
language:
- en
base_model: mistralai/Mistral-7B-v0.1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<img src="https://huggingface.co/castorini/rank_zephyr_7b_v1_full/resolve/main/thumbnail.jpeg" alt="RankZephyr Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- <img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/resolve/main/thumbnail.png" alt="Zephyr Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> -->
# Model Card for RankZephyr 7B V1 - Full
RankZephyr is a series of language models trained to act as helpful reranking assistants built on the Zephyr-7B-β model.
RankZephyr Base is the model that follows single-stage fine-tuning on the RankGPT-3.5 model, while RankZephyr Full is the model that is further fine-tuned on RankGPT-4 reorderings of OpenAI's Ada2 orderings for 5K queries.
## Model description
- **Model type:** A 7B parameter GPT-like model initially fine-tuned on a mix of publicly available, synthetic datasets, followed by task-specific listwise reranking data.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Fine-tuned from model:** [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/castorini/rank_llm
- **Paper:** https://arxiv.org/abs/2312.02724
## Effectiveness
At the time of release, RankZephyr-7B-Full is the state-of-the-art open-source reranking model on various datasets like DL19/20/21/22 and TREC-COVID and TREC-News.
With the MS MARCO v1 collection:
| Model | Size | First Stage | DL19 | DL20|
|-------------|-----|----|---------------|--------------|
| **RankZephyr-7b-v1-full-rho** 🪁 | **7B** | **SPLADE++ ED** | **0.7855** | **0.8255** |
| **RankZephyr-7b-v1-full** 🪁 | **7B** | **SPLADE++ ED** | **0.7803** | **0.8211** |
| RankGPT-4 (PSC) | -| SPLADE++ ED | 0.7601 | 0.7514 |
| RankGPT-4 | -| SPLADE++ ED | 0.7464 | 0.7076 |
| **RankZephyr-7b-v1-base** 🪁 | **7B** | **SPLADE++ ED** | **0.7341** | **0.7213** |
| RankGPT-3.5 | -| SPLADE++ ED | 0.7504 | 0.7120|
More details can be found in the paper.
## Intended uses & limitations
The model is to be used in conjunction with the [RankLLM repository](https://github.com/castorini/rank_llm). While `rank-llm` exists as a PyPI package, we are currently in the early stages of development and encourage users to directly check install from source.
The original Zephyr model is trained for chat. In our case, RankZephyr is fine-tuned to act as a listwise reranking agent. You provide it with a query and documents and get back a reordered list of document identifiers.
## Bias, Risks, and Limitations
The following is an excerpt from the [Zephyr-7B-β model card](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md#bias-risks--limitations):
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
> Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`mistralai/Mistral-7B-v0.1`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
Our model is trained specifically on monolingual English data, effectiveness on multilingual sets is not guaranteed.
## Citation
If you find RankZephyr is useful in your work, please cite the following paper:
```
@ARTICLE{pradeep2023rankzephyr,
title = {{RankZephyr}: Effective and Robust Zero-Shot Listwise Reranking is a Breeze!},
author = {Ronak Pradeep and Sahel Sharifymoghaddam and Jimmy Lin},
year = {2023},
journal = {arXiv:2312.02724}
}
```
|
Rashik24/tinycoder-15M-instruct
|
Rashik24
| 2024-01-19T19:34:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Rashik24/tinycoder-15M",
"base_model:adapter:Rashik24/tinycoder-15M",
"region:us"
] | null | 2024-01-19T19:33:49Z |
---
library_name: peft
base_model: Rashik24/tinycoder-15M
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
wenqiglantz/MistralTrinity-7B-slerp-dpo
|
wenqiglantz
| 2024-01-19T19:24:25Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"chatml",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"conversational",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:wenqiglantz/MistralTrinity-7B-slerp",
"base_model:finetune:wenqiglantz/MistralTrinity-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:07:41Z |
---
base_model: wenqiglantz/MistralTrinity-7B-slerp
tags:
- mistral
- instruct
- finetune
- chatml
- synthetic data
- distillation
- dpo
- rlhf
license: apache-2.0
language:
- en
datasets:
- mlabonne/chatml_dpo_pairs
---
# MistralTrinity-7B-slerp-dpo
Inspired by @mlabonne's blog post [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac), this model was fine-tuned with DPO (Direct Preference Optimization) on base model `MistralTrinity-7B-slerp`, which is a merged model for `mistralai/Mistral-7B-Instruct-v0.2` and `jan-hq/trinity-v1`, using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset.
The code to train this model is available on [Google Colab](https://colab.research.google.com/github/wenqiglantz/llmops/blob/main/Fine_tune_MistralTrinity_7B_slerp_with_DPO.ipynb) and [GitHub](https://github.com/wenqiglantz/llmops/blob/main/Fine_tune_MistralTrinity_7B_slerp_with_DPO.ipynb).
It required an A100 GPU for over an hour.
Check out fine-tuning run details on [Weights & Biases](https://wandb.ai/wenqiglantz/huggingface/runs/sxbgd33f).
|
ntc-ai/SDXL-LoRA-slider.on-a-ship
|
ntc-ai
| 2024-01-19T19:22:16Z | 45 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-19T19:22:12Z |
---
language:
- en
thumbnail: "images/evaluate/on a ship.../on a ship_17_3.0.png"
widget:
- text: on a ship
output:
url: images/on a ship_17_3.0.png
- text: on a ship
output:
url: images/on a ship_19_3.0.png
- text: on a ship
output:
url: images/on a ship_20_3.0.png
- text: on a ship
output:
url: images/on a ship_21_3.0.png
- text: on a ship
output:
url: images/on a ship_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "on a ship"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - on a ship (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/on a ship_17_-3.0.png" width=256 height=256 /> | <img src="images/on a ship_17_0.0.png" width=256 height=256 /> | <img src="images/on a ship_17_3.0.png" width=256 height=256 /> |
| <img src="images/on a ship_19_-3.0.png" width=256 height=256 /> | <img src="images/on a ship_19_0.0.png" width=256 height=256 /> | <img src="images/on a ship_19_3.0.png" width=256 height=256 /> |
| <img src="images/on a ship_20_-3.0.png" width=256 height=256 /> | <img src="images/on a ship_20_0.0.png" width=256 height=256 /> | <img src="images/on a ship_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
on a ship
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.on-a-ship', weight_name='on a ship.safetensors', adapter_name="on a ship")
# Activate the LoRA
pipe.set_adapters(["on a ship"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, on a ship"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2
|
ewqr2130
| 2024-01-19T19:21:48Z | 1,376 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T19:00:27Z |
---
license: apache-2.0
---
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2---- 7k steps.
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2---- 7k steps.
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2---- 7k steps.
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2---- 7k steps.
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2---- 7k steps.
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2---- 7k steps.
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2---- 7k steps.
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2---- 7k steps.
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.