| Column | Type | Range |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-03 18:27:50 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 466 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-03 18:27:26 |
| card | string | length 11 – 1.01M |
Raneechu/textbookbig10_ft2 | Raneechu | 2024-05-26T02:17:38Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-05-26T02:17:35Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: textbookbig10_ft2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# textbookbig10_ft2
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
- PEFT 0.6.2
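The card ships no usage snippet, so below is a minimal inference sketch, assuming the adapter in this repo loads with `PeftModel.from_pretrained` and that you have access to the gated Llama-2 base model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"       # gated; request access first
adapter_id = "Raneechu/textbookbig10_ft2"  # this repo (PEFT adapter only)

# Load the base model, then attach the fine-tuned adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("The textbook explains that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```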
|
kid1802/huggy_test | kid1802 | 2024-05-26T02:14:07Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-05-26T02:14:02Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that walks you through training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: kid1802/huggy_test
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
gokaygokay/imageinwords-paligemma-transformers | gokaygokay | 2024-05-26T02:08:01Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-26T02:02:19Z | ---
license: apache-2.0
---
```bash
pip install git+https://github.com/huggingface/transformers
```
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "gokaygokay/imageinwords-paligemma-transformers"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)
## prefix
prompt = "caption en"
model_inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=512, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
``` |
empathie/Qwen1.5-0.5B-Chat-experiment-2 | empathie | 2024-05-26T02:07:47Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T03:04:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrovejaxd/ABL_trad_j | mrovejaxd | 2024-05-26T02:03:25Z | 31 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-26T00:42:17Z | ---
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ABL_trad_j
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABL_trad_j
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6432
- Accuracy: 0.6883
- F1: 0.6865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.9532 | 1.0 | 1500 | 0.9116 | 0.5825 | 0.5793 |
| 0.8601 | 2.0 | 3000 | 0.8433 | 0.6033 | 0.6016 |
| 0.7962 | 3.0 | 4500 | 0.8150 | 0.6275 | 0.6252 |
| 0.7633 | 4.0 | 6000 | 0.7969 | 0.635 | 0.6334 |
| 0.7153 | 5.0 | 7500 | 0.7825 | 0.6492 | 0.6483 |
| 0.678 | 6.0 | 9000 | 0.7910 | 0.6408 | 0.6392 |
| 0.6336 | 7.0 | 10500 | 0.7772 | 0.6608 | 0.6606 |
| 0.5981 | 8.0 | 12000 | 0.7863 | 0.6617 | 0.6605 |
| 0.5455 | 9.0 | 13500 | 0.7954 | 0.6658 | 0.6657 |
| 0.4972 | 10.0 | 15000 | 0.8206 | 0.6633 | 0.6623 |
| 0.4823 | 11.0 | 16500 | 0.8442 | 0.6683 | 0.6673 |
| 0.4258 | 12.0 | 18000 | 0.8966 | 0.6742 | 0.6734 |
| 0.4182 | 13.0 | 19500 | 0.9327 | 0.6767 | 0.6761 |
| 0.3588 | 14.0 | 21000 | 0.9780 | 0.6717 | 0.6689 |
| 0.3576 | 15.0 | 22500 | 1.0288 | 0.6833 | 0.6828 |
| 0.3252 | 16.0 | 24000 | 1.0873 | 0.6842 | 0.6836 |
| 0.3104 | 17.0 | 25500 | 1.1417 | 0.685 | 0.6847 |
| 0.2691 | 18.0 | 27000 | 1.2447 | 0.6842 | 0.6827 |
| 0.2559 | 19.0 | 28500 | 1.3480 | 0.6825 | 0.6816 |
| 0.2522 | 20.0 | 30000 | 1.4782 | 0.6867 | 0.6859 |
| 0.2234 | 21.0 | 31500 | 1.5748 | 0.6833 | 0.6815 |
| 0.1954 | 22.0 | 33000 | 1.7041 | 0.69 | 0.6897 |
| 0.1979 | 23.0 | 34500 | 1.8398 | 0.6808 | 0.6789 |
| 0.176 | 24.0 | 36000 | 1.9141 | 0.6867 | 0.6860 |
| 0.1862 | 25.0 | 37500 | 2.0105 | 0.6883 | 0.6881 |
| 0.1409 | 26.0 | 39000 | 2.1345 | 0.685 | 0.6840 |
| 0.1527 | 27.0 | 40500 | 2.2039 | 0.6858 | 0.6853 |
| 0.1474 | 28.0 | 42000 | 2.2990 | 0.6933 | 0.6920 |
| 0.1428 | 29.0 | 43500 | 2.3780 | 0.6883 | 0.6878 |
| 0.1348 | 30.0 | 45000 | 2.4859 | 0.6858 | 0.6839 |
| 0.1046 | 31.0 | 46500 | 2.5546 | 0.6825 | 0.6801 |
| 0.1147 | 32.0 | 48000 | 2.6432 | 0.6883 | 0.6865 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
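The card includes no inference example, so here is a minimal sketch using the `transformers` text-classification pipeline; since the training dataset is not documented, the returned labels should be treated as opaque class ids:
```python
from transformers import pipeline

# Load the fine-tuned Spanish BERT classifier straight from the Hub.
classifier = pipeline("text-classification", model="mrovejaxd/ABL_trad_j")

# Example Spanish input; label names depend on the undocumented training data.
print(classifier("Esta película me pareció excelente."))
```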
|
ayoubcim/tt1-falcon-7b | ayoubcim | 2024-05-26T02:02:30Z | 155 | 0 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-26T01:35:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
antitheft159/skyelahAndriy.195 | antitheft159 | 2024-05-26T02:01:59Z | 0 | 0 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-05-26T02:01:46Z | ---
license: cc-by-sa-4.0
---
|
GENIAC-Team-Ozaki/lora-dpo-finetuned-stage4-full-sft-v4-0.5_5e-7_ep-10 | GENIAC-Team-Ozaki | 2024-05-26T01:50:24Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-26T01:38:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dfurman/Mistral-7B-Instruct-v0.3-mlx-4bit | dfurman | 2024-05-26T01:41:55Z | 9 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T01:39:22Z | ---
license: apache-2.0
tags:
- mlx
---
# dfurman/Mistral-7B-Instruct-v0.3-mlx-4bit
The Model [dfurman/Mistral-7B-Instruct-v0.3-mlx-4bit](https://huggingface.co/dfurman/Mistral-7B-Instruct-v0.3-mlx-4bit) was converted to MLX format from [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) using mlx-lm version **0.14.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("dfurman/Mistral-7B-Instruct-v0.3-mlx-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
CMU-AIR2/math-phi-1-5-FULL-Arithmetic-Curriculum-Subjects-8-to-10 | CMU-AIR2 | 2024-05-26T01:31:57Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-26T00:20:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mitchyAI/iveliz | mitchyAI | 2024-05-26T01:30:26Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-26T01:29:47Z | ---
license: creativeml-openrail-m
---
|
samwit/paligemma_vqav2 | samwit | 2024-05-26T01:30:25Z | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:vq_av2",
"base_model:google/paligemma-3b-pt-224",
"base_model:adapter:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
] | null | 2024-05-26T01:09:51Z | ---
license: gemma
library_name: peft
tags:
- generated_from_trainer
base_model: google/paligemma-3b-pt-224
datasets:
- vq_av2
model-index:
- name: paligemma_vqav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma_vqav2
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the vq_av2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
kataragi/controlnetXL_inpaint | kataragi | 2024-05-26T01:19:37Z | 0 | 29 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-19T13:45:19Z | ---
license: creativeml-openrail-m
---
# controlnet_inpaintXL
- This is a ControlNet that lets you modify part of an image with Stable Diffusion SDXL. It can be used with the inpaint preprocessor.
# Usage
Set the image containing the region you want to change in the ControlNet unit.
Set the preprocessor to inpaint; use inpaint only or inpaint+luma.
The recommended base model for the fp16 version is animagineXL3.1. It does not work well with Pony-based models.
The LoRA-type version (400 MB) is for animagineXL3.1 only.
- 
Example reference settings:
- 
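For users outside the WebUI, a rough diffusers-based sketch of the same idea is below; it assumes the ControlNet weights can be loaded with `ControlNetModel.from_pretrained` (they are distributed for WebUI use, so conversion to diffusers format may be needed) and uses the recommended animagineXL3.1 as the base model:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetInpaintPipeline
from diffusers.utils import load_image

# Assumption: the ControlNet weights are usable via from_pretrained (conversion may be required).
controlnet = ControlNetModel.from_pretrained("kataragi/controlnetXL_inpaint", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = load_image("input.png")  # image containing the region to change
mask = load_image("mask.png")    # white = area to repaint
# Here the raw image is passed as the control image; the WebUI inpaint preprocessor does extra masking.
result = pipe(prompt="1girl, smiling", image=image, mask_image=mask, control_image=image).images[0]
result.save("output.png")
```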
|
JianKim3293/llama3_lora_blossmodel | JianKim3293 | 2024-05-26T01:19:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:finetune:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T01:18:39Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
---
# Uploaded model
- **Developed by:** JianKim3293
- **License:** apache-2.0
- **Finetuned from model :** MLP-KTLim/llama-3-Korean-Bllossom-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
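A minimal loading sketch, assuming the uploaded weights (LoRA adapter or merged model) can be loaded back with Unsloth's `FastLanguageModel`; the prompt below is only an illustration:
```python
from unsloth import FastLanguageModel

# Load the uploaded weights; if only a LoRA adapter was pushed, the base model is resolved from the adapter config.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="JianKim3293/llama3_lora_blossmodel",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode

inputs = tokenizer("안녕하세요, 자기소개를 해주세요.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```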
|
atgarcia/wav2vec2part6 | atgarcia | 2024-05-26T01:15:47Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-25T23:57:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
asussome/xwin-finetuned-alpaca-cleaned | asussome | 2024-05-26T01:11:31Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T19:18:46Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: xwin-finetuned-alpaca-cleaned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xwin-finetuned-alpaca-cleaned
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 20
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Ichsan2895/Merak-7B-v4_4bit_q128_awq | Ichsan2895 | 2024-05-26T01:10:16Z | 80 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"id",
"en",
"dataset:wikipedia",
"dataset:Ichsan2895/OASST_Top1_Indonesian",
"dataset:Ichsan2895/alpaca-gpt4-indonesian",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-05-25T18:37:33Z | ---
datasets:
- wikipedia
- Ichsan2895/OASST_Top1_Indonesian
- Ichsan2895/alpaca-gpt4-indonesian
language:
- id
- en
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://huggingface.co/Ichsan2895/Merak-7B-v4/resolve/main/FINAL_LOGO/6.png" alt="MERAK" style="width: 50%; min-width: 100px; display: block; margin: auto;">
</div>
# HAPPY TO ANNOUNCE THE RELEASE OF MERAK-7B-V4_4bit_q128_awq!
Merak-7B is a Large Language Model for the Indonesian language.
This model is based on [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) and fine-tuned on a selection of Indonesian Wikipedia articles that were cleaned beforehand.
Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B is able to run with 16 GB of VRAM.
Licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), Merak-7B empowers AI enthusiasts and researchers alike.
Big thanks to all my friends and communities that helped build our first model. Thanks to Axolotl, a great fine-tuning tool designed to streamline the fine-tuning of various AI models.
Feel free to ask me about the model, and please share the news on your social media. |
0xjones/archillect-test | 0xjones | 2024-05-26T01:08:14Z | 29 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-26T01:04:28Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### archillect-test Dreambooth model trained by 0xjones with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
gaalcoro/Logomarca | gaalcoro | 2024-05-26T00:57:47Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T00:57:47Z | ---
license: apache-2.0
---
|
DJPillu/ppo-LunarLander-v2 | DJPillu | 2024-05-26T00:53:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-26T00:52:51Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.36 +/- 16.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
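Since the snippet above is left as a TODO, here is a minimal sketch for loading and running the checkpoint with `huggingface_sb3`; the checkpoint filename is an assumption based on the usual naming convention and should be checked against the repo's file listing:
```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; verify it in the repository's "Files" tab.
checkpoint = load_from_hub(repo_id="DJPillu/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```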
|
NikolayKozloff/WizardLM-2-7B-abliterated-Q5_0-GGUF | NikolayKozloff | 2024-05-26T00:45:03Z | 5 | 2 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T00:44:50Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/WizardLM-2-7B-abliterated-Q5_0-GGUF
This model was converted to GGUF format from [`fearlessdots/WizardLM-2-7B-abliterated`](https://huggingface.co/fearlessdots/WizardLM-2-7B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fearlessdots/WizardLM-2-7B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/WizardLM-2-7B-abliterated-Q5_0-GGUF --model wizardlm-2-7b-abliterated-q5_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/WizardLM-2-7B-abliterated-Q5_0-GGUF --model wizardlm-2-7b-abliterated-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m wizardlm-2-7b-abliterated-q5_0.gguf -n 128
```
|
antitheft159/blinkrgb.159 | antitheft159 | 2024-05-26T00:40:56Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-05-26T00:40:35Z | ---
license: cc-by-nc-sa-4.0
---
|
Mantis-VL/videollava-7b-video-eval-50k_2048 | Mantis-VL | 2024-05-26T00:37:11Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"video_llava",
"pretraining",
"generated_from_trainer",
"base_model:LanguageBind/Video-LLaVA-7B-hf",
"base_model:finetune:LanguageBind/Video-LLaVA-7B-hf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T14:07:00Z | ---
base_model: LanguageBind/Video-LLaVA-7B-hf
tags:
- generated_from_trainer
model-index:
- name: videollava-7b-video-eval-50k_2048
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videollava-7b-video-eval-50k_2048
This model is a fine-tuned version of [LanguageBind/Video-LLaVA-7B-hf](https://huggingface.co/LanguageBind/Video-LLaVA-7B-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
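The effective batch size above follows directly from the per-device batch size, gradient accumulation steps, and number of GPUs:
$$\text{total\_train\_batch\_size} = 1 \times 16 \times 8 = 128$$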
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
minhz2003/test | minhz2003 | 2024-05-26T00:21:29Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T00:20:20Z | ---
license: apache-2.0
---
|
legraphista/aya-23-8B-IMat-GGUF | legraphista | 2024-05-26T00:17:38Z | 165 | 0 | gguf | [
"gguf",
"quantized",
"GGUF",
"imatrix",
"quantization",
"imat",
"static",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/aya-23-8B",
"base_model:quantized:CohereForAI/aya-23-8B",
"license:cc-by-nc-4.0",
"region:us",
"conversational"
] | text-generation | 2024-05-25T20:21:19Z | ---
base_model: CohereForAI/aya-23-8B
inference: false
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: gguf
license: cc-by-nc-4.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
---
# aya-23-8B-IMat-GGUF
_Llama.cpp imatrix quantization of CohereForAI/aya-23-8B_
Original Model: [CohereForAI/aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B)
Original dtype: `FP16` (`float16`)
Quantized by: llama.cpp [b2998](https://github.com/ggerganov/llama.cpp/releases/tag/b2998)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [aya-23-8B.Q8_0.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q8_0.gguf) | Q8_0 | 8.54GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q6_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q4_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q4_K.gguf) | Q4_K | 5.06GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q3_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q3_K.gguf) | Q3_K | 4.22GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q2_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q2_K.gguf) | Q2_K | 3.44GB | ✅ Available | 🟢 Yes | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [aya-23-8B.FP16.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.FP16.gguf) | F16 | 16.07GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q5_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q5_K.gguf) | Q5_K | 5.80GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q5_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q5_K_S.gguf) | Q5_K_S | 5.67GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q4_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q4_K_S.gguf) | Q4_K_S | 4.83GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q3_K_L.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q3_K_L.gguf) | Q3_K_L | 4.53GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q3_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q3_K_S.gguf) | Q3_K_S | 3.87GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q2_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q2_K_S.gguf) | Q2_K_S | 3.25GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ4_NL.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ4_NL.gguf) | IQ4_NL | 4.81GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ4_XS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ4_XS.gguf) | IQ4_XS | 4.60GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_M.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_M.gguf) | IQ3_M | 3.99GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_S.gguf) | IQ3_S | 3.89GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_XS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_XS.gguf) | IQ3_XS | 3.72GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_XXS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_XXS.gguf) | IQ3_XXS | 3.41GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_M.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_M.gguf) | IQ2_M | 3.08GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_S.gguf) | IQ2_S | 2.90GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_XS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_XS.gguf) | IQ2_XS | 2.80GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_XXS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_XXS.gguf) | IQ2_XXS | 2.59GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ1_M.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ1_M.gguf) | IQ1_M | 2.35GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ1_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ1_S.gguf) | IQ1_S | 2.21GB | ✅ Available | 🟢 Yes | 📦 No
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download legraphista/aya-23-8B-IMat-GGUF --include "aya-23-8B.Q8_0.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/aya-23-8B-IMat-GGUF --include "aya-23-8B.Q8_0/*" --local-dir aya-23-8B.Q8_0
# see FAQ for merging GGUF's
```
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `aya-23-8B.Q8_0`)
3. Run `gguf-split --merge aya-23-8B.Q8_0/aya-23-8B.Q8_0-00001-of-XXXXX.gguf aya-23-8B.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |
Enpas/whisper-base-co | Enpas | 2024-05-26T00:12:44Z | 79 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-23T21:48:26Z | ```
import torch
from transformers import pipeline
device = "cuda:0" if torch.cuda.is_available() else "cpu"
transcribe = pipeline(task="automatic-speech-recognition", model="Enpas/whisper-small-co", chunk_length_s=30, device=device)
transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="am", task="transcribe")
audio = "/content/tr_10000_tr097082.wav"
result = transcribe(audio)
print('Transcription: ', result["text"])
``` |
raulgdp/roberta-multiclase-ag_news | raulgdp | 2024-05-26T00:08:49Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-25T21:35:34Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-multiclase-ag_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-multiclase-ag_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2671
- Rmse: 1.1967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
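The values above map one-to-one onto `transformers.TrainingArguments`; a minimal sketch of how they could be reproduced (the output directory is a hypothetical placeholder, since the original training script is not included in this card):

```python
from transformers import TrainingArguments

# Hyperparameters as listed above; Adam betas/epsilon are the library defaults
training_args = TrainingArguments(
    output_dir="roberta-multiclase-ag_news",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
```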
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.3199 | 1.0 | 15000 | 1.2671 | 1.1967 |
| 1.3837 | 2.0 | 30000 | 1.3864 | 1.2230 |
| 1.3879 | 3.0 | 45000 | 1.3865 | 1.8686 |
| 1.385 | 4.0 | 60000 | 1.3864 | 1.2247 |
| 1.3885 | 5.0 | 75000 | 1.3863 | 1.8720 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.0.1+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
umair894/llama3_1e | umair894 | 2024-05-25T23:58:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T23:58:25Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** umair894
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf | RichardErkhov | 2024-05-25T23:57:51Z | 23 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-25T21:05:28Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-11b-Instruct - GGUF
- Model creator: https://huggingface.co/athirdpath/
- Original model: https://huggingface.co/athirdpath/Llama-3-11b-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-11b-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q2_K.gguf) | Q2_K | 4.01GB |
| [Llama-3-11b-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.IQ3_XS.gguf) | IQ3_XS | 4.44GB |
| [Llama-3-11b-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.IQ3_S.gguf) | IQ3_S | 4.66GB |
| [Llama-3-11b-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q3_K_S.gguf) | Q3_K_S | 4.64GB |
| [Llama-3-11b-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.IQ3_M.gguf) | IQ3_M | 4.79GB |
| [Llama-3-11b-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q3_K.gguf) | Q3_K | 5.1GB |
| [Llama-3-11b-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q3_K_M.gguf) | Q3_K_M | 5.1GB |
| [Llama-3-11b-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q3_K_L.gguf) | Q3_K_L | 5.52GB |
| [Llama-3-11b-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.IQ4_XS.gguf) | IQ4_XS | 5.7GB |
| [Llama-3-11b-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q4_0.gguf) | Q4_0 | 5.94GB |
| [Llama-3-11b-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.IQ4_NL.gguf) | IQ4_NL | 6.0GB |
| [Llama-3-11b-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q4_K_S.gguf) | Q4_K_S | 5.98GB |
| [Llama-3-11b-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q4_K.gguf) | Q4_K | 6.27GB |
| [Llama-3-11b-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q4_K_M.gguf) | Q4_K_M | 6.27GB |
| [Llama-3-11b-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q4_1.gguf) | Q4_1 | 6.56GB |
| [Llama-3-11b-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q5_0.gguf) | Q5_0 | 7.17GB |
| [Llama-3-11b-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q5_K_S.gguf) | Q5_K_S | 7.17GB |
| [Llama-3-11b-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q5_K.gguf) | Q5_K | 7.34GB |
| [Llama-3-11b-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q5_K_M.gguf) | Q5_K_M | 7.34GB |
| [Llama-3-11b-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q5_1.gguf) | Q5_1 | 7.78GB |
| [Llama-3-11b-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q6_K.gguf) | Q6_K | 8.48GB |
| [Llama-3-11b-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Llama-3-11b-Instruct-gguf/blob/main/Llama-3-11b-Instruct.Q8_0.gguf) | Q8_0 | 10.98GB |
Original model description:
---
license: llama3
---
I'm back and doing well! I've got a job in the field now, so we'll see in the long run how that affects my open source output.
Here we have an 11b Llama 3 instruct model for future work.
EDIT: Made a yaml mistake with part funnel, but it still works well.
---

This is a model stock merge of 3 models:
- Part Wave
- Part Block
- Part Funnel
With Part Funnel as the base.
---
Part Wave:
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [0, 12]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [8, 18]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [13, 23]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [18, 32]
---
Part Block:
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [0, 15]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [8, 23]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [16, 32]
---
Part Funnel:
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [0, 15]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [14, 14]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [13, 13]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [12, 12]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [11, 11]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [10, 10]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [9, 9]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [8, 23]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [22, 22]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [21, 21]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [20, 20]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [19, 19]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [18, 18]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [17, 17]
- sources:
- model: NousResearch/Meta-Llama-3-8B-Instruct
layer_range: [16, 32]
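Slice configs like the three above are normally fed to mergekit's CLI. A hypothetical invocation, assuming the Part Funnel block is saved as `part_funnel.yml` and completed with a `merge_method: passthrough` line (not shown in this card):

```
mergekit-yaml part_funnel.yml ./llama-3-11b-part-funnel --cuda
```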
|
fearlessdots/Llama-3-Alpha-Centauri-v0.1 | fearlessdots | 2024-05-25T23:47:33Z | 115 | 9 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:NobodyExistsOnTheInternet/ToxicQAFinal",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T18:00:36Z | ---
license: llama3
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
---
# Llama-3-Alpha-Centauri-v0.1
<img src="alpha_centauri_banner.png" alt="" style="width:500px;height:400px;"/>
**Image generated with [https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS](https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS).**
---
## Disclaimer
**Note:** All models and LoRAs from the **Centaurus** series were created with the sole purpose of research. The usage of this model and/or its related LoRA implies agreement with the following terms:
- The user is responsible for what they might do with it, including how the output of the model is interpreted and used;
- The user should not use the model and its outputs for any illegal purposes;
- The user is the only one responsible for any misuse or negative consequences from using this model and/or its related LoRA.
I do not endorse any particular perspectives presented in the training data.
---
## Centaurus Series
This series aims to develop highly uncensored Large Language Models (LLMs) with the following focuses:
- Science, Technology, Engineering, and Mathematics (STEM)
- Computer Science (including programming)
- Social Sciences
And several key cognitive skills, including but not limited to:
- Reasoning and logical deduction
- Critical thinking
- Analysis
While maintaining strong overall knowledge and expertise, the models will undergo refinement through:
- Fine-tuning processes
- Model merging techniques including Mixture of Experts (MoE)
Please note that these models are experimental and may demonstrate varied levels of effectiveness. Your feedback, critique, or queries are most welcome for improvement purposes.
## Base
This model and its related LoRA were fine-tuned on [https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3).
## LoRA
The LoRA merged with the base model is available at [https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-LoRA](https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-LoRA).
## GGUF
I provide some GGUF files here: [https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF](https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF).
## Datasets
- [https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
## Fine Tuning
### - Quantization Configuration
- load_in_4bit=True
- bnb_4bit_quant_type="fp4"
- bnb_4bit_compute_dtype=compute_dtype
- bnb_4bit_use_double_quant=False
### - PEFT Parameters
- lora_alpha=64
- lora_dropout=0.05
- r=128
- bias="none"
### - Training Arguments
- num_train_epochs=1
- per_device_train_batch_size=1
- gradient_accumulation_steps=4
- optim="adamw_bnb_8bit"
- save_steps=25
- logging_steps=25
- learning_rate=2e-4
- weight_decay=0.001
- fp16=False
- bf16=False
- max_grad_norm=0.3
- max_steps=-1
- warmup_ratio=0.03
- group_by_length=True
- lr_scheduler_type="constant"
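For reference, the settings above can be assembled with the usual transformers/peft stack. This is only a sketch, not the original training script: the compute dtype, LoRA `task_type`, and output directory are assumptions that the card does not specify.

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

# 4-bit quantization configuration as listed above (float16 compute is assumed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)

# PEFT/LoRA parameters as listed above (task_type is assumed)
peft_config = LoraConfig(
    lora_alpha=64,
    lora_dropout=0.05,
    r=128,
    bias="none",
    task_type="CAUSAL_LM",
)

# Training arguments as listed above (output_dir is a placeholder)
training_args = TrainingArguments(
    output_dir="alpha-centauri-v0.1",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    optim="adamw_bnb_8bit",
    save_steps=25,
    logging_steps=25,
    learning_rate=2e-4,
    weight_decay=0.001,
    fp16=False,
    bf16=False,
    max_grad_norm=0.3,
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="constant",
)
```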
## Credits
- Meta ([https://huggingface.co/meta-llama](https://huggingface.co/meta-llama)): for the original Llama-3;
- HuggingFace: for hosting this model and for creating the fine-tuning tools used;
- failspy ([https://huggingface.co/failspy](https://huggingface.co/failspy)): for the base model and the orthogonalization implementation;
- NobodyExistsOnTheInternet ([https://huggingface.co/NobodyExistsOnTheInternet](https://huggingface.co/NobodyExistsOnTheInternet)): for the incredible dataset;
- Undi95 ([https://huggingface.co/Undi95](https://huggingface.co/Undi95)) and Sao10k ([https://huggingface.co/Sao10K](https://huggingface.co/Sao10K)): my main inspirations for doing these models =]
A huge thank you to all of them ☺️
## About Alpha Centauri
**Alpha Centauri** is a triple star system located in the constellation of **Centaurus**. It includes three stars: Rigil Kentaurus (also known as **α Centauri A**), Toliman (or **α Centauri B**), and Proxima Centauri (**α Centauri C**). Proxima Centauri is the nearest star to the Sun, residing at approximately 4.25 light-years (1.3 parsecs) away.
The primary pair, **α Centauri A** and **B**, are both similar to our Sun - **α Centauri A** being a class G star with 1.1 solar masses and 1.5 times the Sun's luminosity; **α Centauri B** having 0.9 solar masses and under half the luminosity of the Sun. They revolve around their shared center every 79 years following an elliptical path, ranging from 35.6 astronomical units apart (nearly Pluto's distance from the Sun) to 11.2 astronomical units apart (around Saturn's distance from the Sun.)
Proxima Centauri, or **α Centauri C**, is a small, dim red dwarf (a class M star) too faint to be seen with the naked eye. At roughly 4.24 light-years (1.3 parsecs) from us, it lies nearer than the **α Centauri AB** binary pair. Presently, the gap between **Proxima Centauri** and **α Centauri AB** amounts to around 13,000 astronomical units (0.21 light-years), comparable to over 430 times Neptune's orbital radius.
Two confirmed exoplanets accompany Proxima Centauri: **Proxima b**, discovered in 2016, is an Earth-sized planet within the habitable zone; **Proxima d**, revealed in 2022, is a potential sub-Earth orbiting close to its host star. Meanwhile, disputes surround **Proxima c**, a mini-Neptune detected in 2019. Intriguingly, hints suggest that **α Centauri A** might possess a Neptune-sized object in its habitable region, but further investigation is required before confirming whether it truly exists and qualifies as a planet. **α Centauri B** was once thought to harbor a planet (named **α Cen Bb**), but subsequent research invalidated this claim, leaving it with no confirmed planets.
**Source:** retrieved from [https://en.wikipedia.org/wiki/Alpha_Centauri](https://en.wikipedia.org/wiki/Alpha_Centauri) and processed with [https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). |
Sorour/phi3-ft-fomc-v2 | Sorour | 2024-05-25T23:45:29Z | 155 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T23:33:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuangDuy/whisper-large-v3-vivos | QuangDuy | 2024-05-25T23:40:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T23:40:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thdangtr/blip_recipe1m_title_v6 | thdangtr | 2024-05-25T23:35:49Z | 67 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-25T23:34:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JawadC/neufchatel | JawadC | 2024-05-25T23:33:52Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-25T23:04:54Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of Neufchatel cheese
widget:
- text: A heart shaped Neufchatel cheese on a rustic wooden table.
output:
url: image_0.png
- text: A heart shaped Neufchatel cheese on a rustic wooden table.
output:
url: image_1.png
- text: A heart shaped Neufchatel cheese on a rustic wooden table.
output:
url: image_2.png
- text: A heart shaped Neufchatel cheese on a rustic wooden table.
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JawadC/neufchatel
<Gallery />
## Model description
These are JawadC/neufchatel LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of Neufchatel cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/JawadC/neufchatel/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
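Until the snippet above is filled in, here is a minimal sketch of loading these LoRA weights on top of the SDXL base model with diffusers; the prompt and output path are only examples:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the LoRA weights from this repository
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("JawadC/neufchatel")

# Include the trigger phrase "a photo of Neufchatel cheese" in the prompt
image = pipe("a photo of Neufchatel cheese on a rustic wooden table").images[0]
image.save("neufchatel.png")
```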
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
JawadC/chevre | JawadC | 2024-05-25T23:29:05Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-25T23:00:51Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of chèvre cheese
widget:
- text: A close-up shot of chèvre cheese on a rustic wooden board, with warm natural
light.
output:
url: image_0.png
- text: A close-up shot of chèvre cheese on a rustic wooden board, with warm natural
light.
output:
url: image_1.png
- text: A close-up shot of chèvre cheese on a rustic wooden board, with warm natural
light.
output:
url: image_2.png
- text: A close-up shot of chèvre cheese on a rustic wooden board, with warm natural
light.
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JawadC/chevre
<Gallery />
## Model description
These are JawadC/chevre LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of chèvre cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/JawadC/chevre/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
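Until the snippet above is filled in, a minimal sketch with diffusers (prompt and output path are only examples):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the LoRA weights from this repository
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("JawadC/chevre")

# Include the trigger phrase "a photo of chèvre cheese" in the prompt
image = pipe("a photo of chèvre cheese on a rustic wooden board").images[0]
image.save("chevre.png")
```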
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
ahmedgongi/Llama_dev3model_finale10 | ahmedgongi | 2024-05-25T23:26:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T23:26:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedgongi/Llama_dev3tokenizer_finale10 | ahmedgongi | 2024-05-25T23:26:09Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T23:26:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CMU-AIR2/math-phi-1-5-FULL-Arithmetic-Curriculum-Subjects-1-to-5 | CMU-AIR2 | 2024-05-25T23:23:21Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T16:45:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Apel-sin/llama-3-8B-iterative-DPO-final-exl2 | Apel-sin | 2024-05-25T23:19:50Z | 4 | 1 | null | [
"arxiv:2405.07863",
"arxiv:2312.11456",
"license:llama3",
"region:us"
] | null | 2024-05-24T12:41:16Z | ---
license: llama3
---
# Exllama v2 RLHFlow/LLaMA3-iterative-DPO-final
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.21">turboderp's ExLlamaV2 v0.0.21</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: <a href="https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final">RLHFlow/LLaMA3-iterative-DPO-final</a><br>
Calibration dataset: <a href="https://huggingface.co/datasets/cosmicvalor/toxic-qna">toxic-qna</a>
## Prompt format
```
<|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
```
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/Apel-sin/llama-3-8B-iterative-DPO-final-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Apel-sin/llama-3-8B-iterative-DPO-final-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Apel-sin/llama-3-8B-iterative-DPO-final-exl2/tree/5_0) | 5.0 | 8.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
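Since each size lives on its own branch, a single quantization can be fetched by passing the branch name as a revision; for example, for the 6.5 bpw files:

```
huggingface-cli download Apel-sin/llama-3-8B-iterative-DPO-final-exl2 --revision 6_5 --local-dir llama-3-8B-iterative-DPO-final-6_5
```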
# LLaMA3-iterative-DPO-final
## Introduction
We release an unofficial checkpoint of a state-of-the-art instruct model of its class, **LLaMA3-iterative-DPO-final**.
On all three widely-used instruct model benchmarks: **Alpaca-Eval-V2**, **MT-Bench**, **Chat-Arena-Hard**, our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most large open-sourced models (e.g., Mixtral-8x7B-it),
and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained with open-sourced datasets without any additional human-/GPT4-labeling.
Even better, we provide a [detailed recipe](https://github.com/RLHFlow/Online-RLHF) to reproduce the model. Enjoy!
## Model Releases
See the [collection](https://huggingface.co/collections/RLHFlow/online-rlhf-663ae95fade1a39663dab218) of the training set, reward/preference model, SFT model.
- [SFT model](https://huggingface.co/RLHFlow/LLaMA3-SFT)
- [Reward model](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1)
## Dataset
- [Preference data mix](https://huggingface.co/datasets/hendrydong/preference_700K)
- [Prompt collection for RLHF training](https://huggingface.co/datasets/RLHFlow/prompt-collection-v0.1)
## Training methods
We have developed a simple and efficient online RLHF recipe for LLM instruct training. Our recipe is DPO-based and thus much cheaper and simpler to train and tune compared to PPO-based approaches.
Unlike widely-used offline DPO, the online component of our approach effectively mitigates distribution shifts during policy optimization.
For a detailed exposition, please refer to our accompanying technical report.
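For intuition, the core of a DPO-style update is a logistic loss on the reward margin between chosen and rejected responses, measured against a frozen reference model. A schematic PyTorch sketch (not the actual training code used for this model):

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Schematic DPO objective on per-sequence log-probabilities."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected responses
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```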
## Chat Benchmarks
| **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** |
|-------------------------|----------|-------------------|-----------------------|--------------|---------------------|
| **Small Open-Sourced Models** | | | | | |
| Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 |
| Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - |
| Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 |
| Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - |
| Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 |
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 |
| **Ours** | | | | | |
| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 |
| Ours (Online RLHF) | 8B | Iterative DPO | **37.2** | **8.46** | **29.1** |
| **Large Open-Sourced Models** | | | | | |
| Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 |
| Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 |
| Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 |
| Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 |
| LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 |
| Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 |
| **Proprietary Models** | | | | | |
| GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 |
| GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 |
| GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 |
| Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 |
| GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 |
## Academic Benchmarks
| **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** |
|----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------|
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
| Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 |
| Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

# Load the model and tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final")
tokenizer = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final")

messages = [
    {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"},
]

# Format the conversation with the model's chat template and move everything to the GPU.
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = model_inputs.to(device)
model.to(device)

# Sample a response of up to 1024 new tokens and decode it back to text.
output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True)
model_outputs = tokenizer.batch_decode(output_tokens)
print(model_outputs[0])
```
## Limitations
RLHFlow/LLaMA3-iterative-DPO-final is an unofficial checkpoint developed to illustrate the power of online iterative RLHF and is intended for research purposes. While safety and ethical considerations are integral to our alignment process,
there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions.
We are committed to continuous improvement in our models to minimize such risks and encourage responsible usage.
## Citation
Please cite our technical report if you find our model useful for your research or product.
```
@misc{dong2024rlhf,
title={RLHF Workflow: From Reward Modeling to Online RLHF},
author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
year={2024},
eprint={2405.07863},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{xiong2024iterative,
title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
year={2024},
eprint={2312.11456},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
ayoubcim/midjourney-falcon-7b | ayoubcim | 2024-05-25T23:15:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T23:14:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
datek/gemma-2b-flock-1716678510 | datek | 2024-05-25T23:10:49Z | 154 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T23:08:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
szwuwen/mistral-7b-v3 | szwuwen | 2024-05-25T23:09:31Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T05:45:50Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** szwuwen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
student-47/wav2vec2-large-xlrs-korean-v5 | student-47 | 2024-05-25T23:06:27Z | 116 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:zeroth_korean",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-25T07:41:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/wav2vec2-xls-r-300m
datasets:
- zeroth_korean
metrics:
- wer
model-index:
- name: wav2vec2-large-xlrs-korean-v5
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: zeroth_korean
type: zeroth_korean
config: clean
split: None
args: clean
metrics:
- type: wer
value: 0.2433368468604126
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlrs-korean-v5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the zeroth_korean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1300
- Wer: 0.2433
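To transcribe your own audio with this checkpoint, a minimal inference sketch is shown below (the file name is a placeholder and the recording should be 16 kHz mono audio):

```python
from transformers import pipeline

# Minimal inference sketch; "sample.wav" stands in for a 16 kHz mono Korean recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="student-47/wav2vec2-large-xlrs-korean-v5",
)
print(asr("sample.wav")["text"])
```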
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 5.1453 | 1.4368 | 500 | 3.1530 | 1.0 |
| 2.4287 | 2.8736 | 1000 | 0.6084 | 0.8317 |
| 0.5556 | 4.3103 | 1500 | 0.3414 | 0.6165 |
| 0.3929 | 5.7471 | 2000 | 0.2729 | 0.5386 |
| 0.3211 | 7.1839 | 2500 | 0.2294 | 0.4794 |
| 0.281 | 8.6207 | 3000 | 0.2052 | 0.4298 |
| 0.2483 | 10.0575 | 3500 | 0.1911 | 0.4061 |
| 0.2243 | 11.4943 | 4000 | 0.1685 | 0.3873 |
| 0.2023 | 12.9310 | 4500 | 0.1627 | 0.3524 |
| 0.188 | 14.3678 | 5000 | 0.1572 | 0.3272 |
| 0.1784 | 15.8046 | 5500 | 0.1495 | 0.3131 |
| 0.1677 | 17.2414 | 6000 | 0.1424 | 0.2881 |
| 0.1533 | 18.6782 | 6500 | 0.1418 | 0.2709 |
| 0.1501 | 20.1149 | 7000 | 0.1387 | 0.2822 |
| 0.1402 | 21.5517 | 7500 | 0.1401 | 0.2697 |
| 0.1353 | 22.9885 | 8000 | 0.1367 | 0.2643 |
| 0.133 | 24.4253 | 8500 | 0.1337 | 0.2578 |
| 0.1254 | 25.8621 | 9000 | 0.1355 | 0.2560 |
| 0.1262 | 27.2989 | 9500 | 0.1339 | 0.2474 |
| 0.121 | 28.7356 | 10000 | 0.1300 | 0.2433 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
diwanshus/codequalbert | diwanshus | 2024-05-25T23:04:54Z | 163 | 1 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"arxiv:1910.09700",
"doi:10.57967/hf/2308",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-13T04:42:05Z | ---
library_name: transformers
license: apache-2.0
language:
- en
---
# Model Card for Model ID
The CodeQualBert model assesses the quality of a given piece of Python code, labeling it into one of three quality tiers: low, average, and high.
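A minimal usage sketch with the 🤗 Transformers `pipeline` API is shown below; it is an illustration rather than an official example, and the exact label names returned come from the model's configuration:

```python
from transformers import pipeline

# Illustrative sketch: the label names (low/average/high tiers) are defined by the model config.
classifier = pipeline("text-classification", model="diwanshus/codequalbert")

code_snippet = "def add(a, b):\n    return a + b"
print(classifier(code_snippet))
```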
## Model Details
CodeQualBert is a fine-tuned CodeBERT model trained on the CodeQual dataset.
### Model Description
<!-- Provide a longer summary of what this model is. -->
Key details of the CodeQualBert model are listed below:
- **Developed by:** Diwanshu Shekhar and Dr. Mohammad Mahoor
- **Finetuned from model [optional]:** CodeBERT
- **Language(s) (NLP):** English
- **License:** Apache 2.0
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended to be used for the code quality assessment task.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JawadC/brie_de_melun | JawadC | 2024-05-25T23:02:01Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-24T12:18:14Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of Brie de Melun cheese
widget:
- text: A piece of Brie de Melun cheese on a rustic wooden table.
output:
url: image_0.png
- text: A piece of Brie de Melun cheese on a rustic wooden table.
output:
url: image_1.png
- text: A piece of Brie de Melun cheese on a rustic wooden table.
output:
url: image_2.png
- text: A piece of Brie de Melun cheese on a rustic wooden table.
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JawadC/brie_de_melun
<Gallery />
## Model description
These are JawadC/brie_de_melun LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use the prompt `a photo of Brie de Melun cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](JawadC/brie_de_melun/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
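A minimal sketch of how these weights could be loaded with 🤗 Diffusers (the base model, VAE, and trigger prompt follow this card; the precision and device choices are assumptions):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Base SDXL pipeline with the fp16-fix VAE used during training (per the card above).
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("JawadC/brie_de_melun")

image = pipe("a photo of Brie de Melun cheese on a rustic wooden table").images[0]
image.save("brie_de_melun.png")
```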
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
JawadC/ossau-iraty | JawadC | 2024-05-25T22:59:11Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-25T22:32:28Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of OSSAU-IRATY cheese
widget:
- text: A piece of OSSAU-IRATY cheese on a rustic wooden table.
output:
url: image_0.png
- text: A piece of OSSAU-IRATY cheese on a rustic wooden table.
output:
url: image_1.png
- text: A piece of OSSAU-IRATY cheese on a rustic wooden table.
output:
url: image_2.png
- text: A piece of OSSAU-IRATY cheese on a rustic wooden table.
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JawadC/ossau-iraty
<Gallery />
## Model description
These are JawadC/ossau-iraty LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use the prompt `a photo of OSSAU-IRATY cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](JawadC/ossau-iraty/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
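A minimal sketch of how these weights could be loaded with 🤗 Diffusers (the base model, VAE, and trigger prompt follow this card; the precision and device choices are assumptions):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("JawadC/ossau-iraty")

image = pipe("a photo of OSSAU-IRATY cheese on a rustic wooden table").images[0]
image.save("ossau_iraty.png")
```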
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lesserfield/fiona-7B-v0.2 | lesserfield | 2024-05-25T22:50:03Z | 129 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:niizam/fiona-sft",
"base_model:BarraHome/Mistroll-7B-v2.2",
"base_model:finetune:BarraHome/Mistroll-7B-v2.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-21T03:37:28Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: BarraHome/Mistroll-7B-v2.2
model-index:
- name: fiona-7B-v0.2
results:
- task:
type: text-generation
metrics:
- name: Average
type: Average
value: 69.5
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
metrics:
- name: AI2 Reasoning Challenge
type: AI2 Reasoning Challenge
value: 65.1
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
metrics:
- name: HellaSwag
type: HellaSwag
value: 85.49
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
metrics:
- name: MMLU
type: MMLU
value: 62.78
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
metrics:
- name: TruthfulQA
type: TruthfulQA
value: 56.53
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
metrics:
- name: Winogrande
type: Winogrande
value: 78.77
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
metrics:
- name: GSM8K
type: GSM8K
value: 68.31
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
datasets:
- niizam/fiona-sft
---
# Uploaded model
- **Developed by:** niizam
- **License:** apache-2.0
- **Finetuned from model :** BarraHome/Mistroll-7B-v2.2
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
LA1512/Led-pubmed-20K-4096-epoch2 | LA1512 | 2024-05-25T22:47:57Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"led",
"text2text-generation",
"generated_from_trainer",
"base_model:LA1512/Led-pubmed-20K-4096-v2",
"base_model:finetune:LA1512/Led-pubmed-20K-4096-v2",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-25T22:47:40Z | ---
license: bsd-3-clause
base_model: LA1512/Led-pubmed-20K-4096-v2
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [LA1512/Led-pubmed-20K-4096-v2](https://huggingface.co/LA1512/Led-pubmed-20K-4096-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4638 | 0.16 | 200 | 3.4713 |
| 3.3407 | 0.32 | 400 | 3.4833 |
| 3.2799 | 0.48 | 600 | 3.4805 |
| 3.3883 | 0.64 | 800 | 3.4563 |
| 3.3559 | 0.8 | 1000 | 3.4350 |
| 3.4671 | 0.96 | 1200 | 3.4248 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
G-R-A-V-I-T-Y/flan-t5-base-ARv1-ARv2 | G-R-A-V-I-T-Y | 2024-05-25T22:46:08Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:G-R-A-V-I-T-Y/flan-t5-base-ARv1",
"base_model:finetune:G-R-A-V-I-T-Y/flan-t5-base-ARv1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-25T22:19:47Z | ---
license: apache-2.0
base_model: G-R-A-V-I-T-Y/flan-t5-base-ARv1
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-ARv1-ARv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-ARv1-ARv2
This model is a fine-tuned version of [G-R-A-V-I-T-Y/flan-t5-base-ARv1](https://huggingface.co/G-R-A-V-I-T-Y/flan-t5-base-ARv1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7981
- Exact Match: 10.0
- Gen Len: 4.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| No log | 1.0 | 7 | 0.7981 | 10.0 | 4.0 |
| No log | 2.0 | 14 | 0.7982 | 10.0 | 4.0 |
| No log | 3.0 | 21 | 0.7982 | 10.0 | 4.0 |
| No log | 4.0 | 28 | 0.7982 | 10.0 | 4.0 |
| No log | 5.0 | 35 | 0.7982 | 10.0 | 4.0 |
| No log | 6.0 | 42 | 0.7982 | 10.0 | 4.0 |
| No log | 7.0 | 49 | 0.7982 | 10.0 | 4.0 |
| No log | 8.0 | 56 | 0.7982 | 10.0 | 4.0 |
| No log | 9.0 | 63 | 0.7982 | 10.0 | 4.0 |
| No log | 10.0 | 70 | 0.7982 | 10.0 | 4.0 |
| No log | 11.0 | 77 | 0.7983 | 10.0 | 4.0 |
| No log | 12.0 | 84 | 0.7983 | 10.0 | 4.0 |
| No log | 13.0 | 91 | 0.7983 | 10.0 | 4.0 |
| No log | 14.0 | 98 | 0.7983 | 10.0 | 4.0 |
| No log | 15.0 | 105 | 0.7984 | 10.0 | 4.0 |
| No log | 16.0 | 112 | 0.7984 | 10.0 | 4.0 |
| No log | 17.0 | 119 | 0.7984 | 10.0 | 4.0 |
| No log | 18.0 | 126 | 0.7984 | 10.0 | 4.0 |
| No log | 19.0 | 133 | 0.7984 | 10.0 | 4.0 |
| No log | 20.0 | 140 | 0.7984 | 10.0 | 4.0 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Robi21/Meta-Llama-3-8B-Q4_K_M-GGUF | Robi21 | 2024-05-25T22:45:51Z | 2 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T22:45:32Z | ---
language:
- en
license: llama3
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# Robi21/Meta-Llama-3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B`](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Robi21/Meta-Llama-3-8B-Q4_K_M-GGUF --model meta-llama-3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Robi21/Meta-Llama-3-8B-Q4_K_M-GGUF --model meta-llama-3-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m meta-llama-3-8b-q4_k_m.gguf -n 128
```
|
arinakosovskaia/implicit_toxicity | arinakosovskaia | 2024-05-25T22:35:04Z | 112 | 6 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"ru",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-10T14:36:27Z | ---
language:
- ru
license: apache-2.0
library_name: transformers
metrics:
- precision
- recall
- f1
pipeline_tag: text-classification
---
# Model Card for Model ID
Detects implicit toxicity in Russian text (more details will be added later :)).
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

text = "<your_text>"  # replace with the Russian text you want to score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_name = 'arinakosovskaia/implicit_toxicity'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name).to(device)

# Tokenize the text, run the classifier, and turn the logits into a toxicity probability.
encoded_text = tokenizer.encode(text, return_tensors='pt').to(device)
outputs = model(encoded_text)
logits = outputs[0]
prob = torch.nn.functional.softmax(logits, dim=1)[:, 1]  # probability of the toxic class
print(prob.cpu().detach().numpy()[0])
```
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
quangtqv/cross_encoder_tool_learning_backbone_beta_26_5 | quangtqv | 2024-05-25T22:29:28Z | 120 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-25T22:28:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
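In the absence of an official snippet, a minimal sketch for scoring a (query, tool description) pair is given below. It assumes the checkpoint is a standard cross-encoder exposed through `AutoModelForSequenceClassification`; the example inputs are purely illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "quangtqv/cross_encoder_tool_learning_backbone_beta_26_5"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Score how relevant a candidate tool description is to a user query.
query = "Convert 100 USD to EUR"
tool = "currency_exchange: converts an amount between two currencies"
inputs = tokenizer(query, tool, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)
print(logits)
```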
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MuniebAbdelrahman/bert-finetuned-squad | MuniebAbdelrahman | 2024-05-25T22:24:28Z | 72 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-25T17:47:49Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: MuniebAbdelrahman/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MuniebAbdelrahman/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2732
- Epoch: 0
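For reference, a minimal extractive question-answering sketch is shown below; it assumes the exported TensorFlow weights load through the standard `pipeline` API, and the example question and context are illustrative.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="MuniebAbdelrahman/bert-finetuned-squad")

result = qa(
    question="What does BERT stand for?",
    context="BERT stands for Bidirectional Encoder Representations from Transformers.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```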
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5545, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2732 | 0 |
### Framework versions
- Transformers 4.41.0
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
vyctorbh/phi-3-medium-alpaca-pt | vyctorbh | 2024-05-25T22:04:55Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Phi-3-medium-4k-instruct-bnb-4bit",
"base_model:quantized:unsloth/Phi-3-medium-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-25T21:17:43Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-medium-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** vyctorbh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-medium-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
oskarandrsson/wav2vec2-2-bert-swedish-lm | oskarandrsson | 2024-05-25T22:01:18Z | 16 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"sv",
"dataset:common_voice_17_0",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-20T13:28:40Z | ---
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: w2v-bert-2.0-sv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: sv-SE
split: test
args: sv-SE
metrics:
- name: Wer
type: wer
value: 0.10046931592103249
language:
- sv
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-sv
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1962
- Wer: 0.1005
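A minimal transcription sketch (assuming the checkpoint loads through the standard ASR `pipeline`; the audio path is a placeholder, and 16 kHz mono speech is the expected input for Wav2Vec2-BERT models):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="oskarandrsson/wav2vec2-2-bert-swedish-lm",
)

# Path is a placeholder; any Swedish speech recording should work.
print(asr("samtal.wav")["text"])
```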
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 2.075 | 0.7407 | 300 | 0.3441 | 0.3057 |
| 0.2837 | 1.4815 | 600 | 0.2995 | 0.2274 |
| 0.2081 | 2.2222 | 900 | 0.2443 | 0.1768 |
| 0.1579 | 2.9630 | 1200 | 0.2143 | 0.1493 |
| 0.1248 | 3.7037 | 1500 | 0.2165 | 0.1504 |
| 0.0934 | 4.4444 | 1800 | 0.1869 | 0.1284 |
| 0.0719 | 5.1852 | 2100 | 0.2072 | 0.1216 |
| 0.0573 | 5.9259 | 2400 | 0.1949 | 0.1195 |
| 0.0436 | 6.6667 | 2700 | 0.2025 | 0.1142 |
| 0.0317 | 7.4074 | 3000 | 0.2003 | 0.1097 |
| 0.0256 | 8.1481 | 3300 | 0.1942 | 0.1060 |
| 0.0169 | 8.8889 | 3600 | 0.1851 | 0.1030 |
| 0.0121 | 9.6296 | 3900 | 0.1962 | 0.1005 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
devjwsong/reinforce-Pixelcopter-PLE-v0 | devjwsong | 2024-05-25T21:56:14Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-25T13:58:50Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 38.15 +/- 29.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
krishi0311/gpt2-wikitext2 | krishi0311 | 2024-05-25T21:52:52Z | 224 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T21:10:08Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1121
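Since the loss above is a cross-entropy value, the corresponding perplexity can be recovered directly, and the checkpoint can be sampled with the standard text-generation pipeline (sketch; the prompt is illustrative):

```python
import math
from transformers import pipeline

# Perplexity implied by the reported evaluation loss.
print(math.exp(6.1121))  # ~451

generator = pipeline("text-generation", model="krishi0311/gpt2-wikitext2")
print(generator("The history of science", max_new_tokens=40)[0]["generated_text"])
```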
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.551 | 1.0 | 2249 | 6.4712 |
| 6.1869 | 2.0 | 4498 | 6.1959 |
| 6.0103 | 3.0 | 6747 | 6.1121 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
rishiA/my_awesome_mind_model | rishiA | 2024-05-25T21:37:58Z | 168 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-05-25T21:32:49Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.07964601769911504
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6532
- Accuracy: 0.0796
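A minimal inference sketch (assuming the checkpoint loads through the audio-classification `pipeline`; the file path is a placeholder — note that the accuracy above is close to chance for 14 intent classes, so predictions should be treated accordingly):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="rishiA/my_awesome_mind_model")

# Path is a placeholder for a 16 kHz speech recording.
for pred in classifier("example_call.wav"):
    print(pred["label"], round(pred["score"], 3))
```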
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6368 | 0.0531 |
| No log | 1.8667 | 7 | 2.6427 | 0.0708 |
| 2.6403 | 2.9333 | 11 | 2.6464 | 0.0619 |
| 2.6403 | 4.0 | 15 | 2.6446 | 0.0619 |
| 2.6403 | 4.8 | 18 | 2.6427 | 0.0442 |
| 2.6381 | 5.8667 | 22 | 2.6478 | 0.0796 |
| 2.6381 | 6.9333 | 26 | 2.6526 | 0.0708 |
| 2.6304 | 8.0 | 30 | 2.6532 | 0.0796 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
FuturisticVibes/aya-23-35B-8.0bpw-h8-exl2 | FuturisticVibes | 2024-05-25T21:27:46Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-25T21:18:00Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
I have no idea what I’m doing… if this causes the apocalypse someone please let me know.
aya-23-35B 8.0bpw h8 EXL2
Includes a [measurement.json](https://huggingface.co/FuturisticVibes/aya-23-35B-8.0bpw-h8-exl2/tree/measurement) file for further quantization.
Original Model: https://huggingface.co/CohereForAI/aya-23-35B
# Original Model Card
# Model Card for Aya-23-35B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-35B
- Model Size: 35 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-35B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat) here. You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
```
|
Vikhrmodels/kolibri-vikhr-mistral-0427 | Vikhrmodels | 2024-05-25T21:23:21Z | 198 | 3 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"conversational",
"en",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T13:20:03Z | ---
license: apache-2.0
language:
- en
- ru
---
## Description
This is an instruction-following model (based on Vikhr Base) optimized for the Russian language.
It was trained using [kolibrify](https://github.com/oKatanaaa/kolibrify) via curriculum learning on a multitude of instruction datasets.
The model uses the ChatML template and was trained to be sensitive to the system prompt, so experiment with it.
Primary uses: information retrieval, classification, semantic analysis, translation (en-ru), etc. Currently in pre-alpha.
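A minimal chat sketch (assuming the standard `transformers` API and that the tokenizer ships the ChatML chat template; the system and user messages are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "Vikhrmodels/kolibri-vikhr-mistral-0427"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "Ты — полезный ассистент. Отвечай кратко."},
    {"role": "user", "content": "Переведи на английский: 'Сегодня хорошая погода.'"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```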
## Instruction following evals
The model was tested using the following benchmarks:
- [ruIFEval](https://github.com/NLP-Core-Team/ruIFEval)
- [ifeval](https://github.com/google-research/google-research/tree/master/instruction_following_eval)
| Eval name | Strict Value | Loose Value |
|---------------------------------|----|----|
|Avg. |*37.82*|*39.24*|
|ifeval-prompt-level |33.08|35.12|
|ifeval-instruction-level |44.48|46.40|
|ru-ifeval-prompt-level |31.79|32.53|
|ru-ifeval-instruction-level |41.96|42.92|
|
camidenecken/mistral-7B-v0.2-quant | camidenecken | 2024-05-25T21:18:39Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-25T21:14:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
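No official snippet is provided; the sketch below assumes the repository contains a ready-to-load 4-bit (bitsandbytes) Mistral checkpoint, as the repository tags suggest, and the prompt is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "camidenecken/mistral-7B-v0.2-quant"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Quantization settings are read from the checkpoint's config;
# bitsandbytes and a CUDA GPU are required for 4-bit inference.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```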
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JOY-ZHE/bloomz-560m_PROMPT_TUNING_CAUSAL_LM | JOY-ZHE | 2024-05-25T21:16:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T21:16:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
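No official snippet is provided; the sketch below assumes the repository holds a PEFT prompt-tuning adapter for `bigscience/bloomz-560m`, as the repository name suggests, and the prompt is illustrative.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "JOY-ZHE/bloomz-560m_PROMPT_TUNING_CAUSAL_LM"
config = PeftConfig.from_pretrained(repo)

base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, repo)  # attaches the learned soft prompt

inputs = tokenizer("Tweet text : I love this product! Label :", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=10)[0], skip_special_tokens=True))
```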
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fombus/mistral_history | fombus | 2024-05-25T21:14:29Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-25T13:15:05Z | ---
license: apache-2.0
---
|
cstr/Llama3-DiscoLeo-Instruct-8B-v0.1-mlx | cstr | 2024-05-25T21:01:39Z | 80 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"de",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T20:42:48Z | ---
language:
- de
license: llama3
library_name: transformers
tags:
- mlx
---
# cstr/Llama3-DiscoLeo-Instruct-8B-v0.1-mlx
The Model [cstr/Llama3-DiscoLeo-Instruct-8B-v0.1-mlx](https://huggingface.co/cstr/Llama3-DiscoLeo-Instruct-8B-v0.1-mlx) was converted to MLX format from [DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1) using mlx-lm version **0.14.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("cstr/Llama3-DiscoLeo-Instruct-8B-v0.1-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
LarryAIDraw/kamuromasumi-nvwls-v1 | LarryAIDraw | 2024-05-25T20:59:14Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-25T20:53:27Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/472464/masumi-kamuro-classroom-of-the-elite-lora |
LarryAIDraw/hanying-nvwls-v1 | LarryAIDraw | 2024-05-25T20:59:04Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-25T20:53:02Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/472463/hanying-punishing-gray-raven-lora |
LarryAIDraw/y4ngy4ng | LarryAIDraw | 2024-05-25T20:58:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-25T20:51:17Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/472508/yangyang-wuthering-waves?modelVersionId=525636 |
HaileyStorm/llama3-5.4b-instruct-unhealed | HaileyStorm | 2024-05-25T20:58:14Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"prune",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T23:02:34Z | ---
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
- prune
---
# merged
This is a "merge" of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
It is a prune of Meta-Llama-3-8B-Instruct down to 20 layers, or about 5.4B parameters.
Mostly, this is a test of pruning & healing an instruct-tuned model.
THIS MODEL HAS NOT BEEN HEALED. It is presently unusable. The healed version will be in a different repository.
This size should allow bf16 inference on 24GB VRAM, Q8 or Q6 inference on 6GB VRAM, Q5 inference on 4GB VRAM, and fine-tuning ... well, with less VRAM than an 8B model.
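As a quick sanity check on the size claim, the parameter count can be read off the downloaded checkpoint (sketch; enough RAM for the bf16 weights is assumed):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "HaileyStorm/llama3-5.4b-instruct-unhealed", torch_dtype="bfloat16"
)
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")
```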
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 16]
model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
- layer_range: [20, 21]
model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
- layer_range: [29, 32]
model: meta-llama/Meta-Llama-3-8B-Instruct
```
|
LarryAIDraw/Firefly_v1 | LarryAIDraw | 2024-05-25T20:58:12Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-25T20:49:52Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/474862/firefly-honkai-star-rail |
Project-Sirus/AI | Project-Sirus | 2024-05-25T20:57:11Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-25T20:57:11Z | ---
license: apache-2.0
---
|
seregadgl101/baii_rerank_v20_1ep | seregadgl101 | 2024-05-25T20:56:59Z | 12 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-25T20:55:41Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# seregadgl101/baii_rerank_v20_1ep
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('seregadgl101/baii_rerank_v20_1ep')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('seregadgl101/baii_rerank_v20_1ep')
model = AutoModel.from_pretrained('seregadgl101/baii_rerank_v20_1ep')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=seregadgl101/baii_rerank_v20_1ep)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf | RichardErkhov | 2024-05-25T20:54:48Z | 2 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T17:49:38Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Synatra-11B-Tb2M_SM - GGUF
- Model creator: https://huggingface.co/maywell/
- Original model: https://huggingface.co/maywell/Synatra-11B-Tb2M_SM/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Synatra-11B-Tb2M_SM.Q2_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q2_K.gguf) | Q2_K | 3.73GB |
| [Synatra-11B-Tb2M_SM.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [Synatra-11B-Tb2M_SM.IQ3_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [Synatra-11B-Tb2M_SM.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [Synatra-11B-Tb2M_SM.IQ3_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [Synatra-11B-Tb2M_SM.Q3_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q3_K.gguf) | Q3_K | 4.84GB |
| [Synatra-11B-Tb2M_SM.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [Synatra-11B-Tb2M_SM.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [Synatra-11B-Tb2M_SM.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [Synatra-11B-Tb2M_SM.Q4_0.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q4_0.gguf) | Q4_0 | 5.66GB |
| [Synatra-11B-Tb2M_SM.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Synatra-11B-Tb2M_SM.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [Synatra-11B-Tb2M_SM.Q4_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q4_K.gguf) | Q4_K | 6.02GB |
| [Synatra-11B-Tb2M_SM.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [Synatra-11B-Tb2M_SM.Q4_1.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q4_1.gguf) | Q4_1 | 6.27GB |
| [Synatra-11B-Tb2M_SM.Q5_0.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q5_0.gguf) | Q5_0 | 6.89GB |
| [Synatra-11B-Tb2M_SM.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [Synatra-11B-Tb2M_SM.Q5_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q5_K.gguf) | Q5_K | 7.08GB |
| [Synatra-11B-Tb2M_SM.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [Synatra-11B-Tb2M_SM.Q5_1.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q5_1.gguf) | Q5_1 | 7.51GB |
| [Synatra-11B-Tb2M_SM.Q6_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q6_K.gguf) | Q6_K | 8.2GB |
| [Synatra-11B-Tb2M_SM.Q8_0.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-11B-Tb2M_SM-gguf/blob/main/Synatra-11B-Tb2M_SM.Q8_0.gguf) | Q8_0 | 10.62GB |
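These files follow the usual GGUF conventions; a minimal `llama-cpp-python` sketch for one of the quants above is given below (the local path, context size, and prompt are illustrative).

```python
from llama_cpp import Llama

llm = Llama(model_path="Synatra-11B-Tb2M_SM.Q4_K_M.gguf", n_ctx=4096)

# Prompt formatting should follow the original model's chat template;
# a bare prompt is used here purely for illustration.
out = llm("대한민국의 수도는", max_tokens=64)
print(out["choices"][0]["text"])
```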
Original model description:
---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# **Synatra-11B-Tb2M-SM**
Made by StableFluffy
**Contact (Do not Contact for personal things.)**
Discord : is.maywell
Telegram : AlzarTakkarsen
## License
This model is strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only which takes priority over the **MISTRAL APACHE 2.0**.
The "Model" is completely free (ie. base model, derivates, merges/mixes) to use for non-commercial purposes as long as the the included **cc-by-nc-4.0** license in any parent repository, and the non-commercial use statute remains, regardless of other models' licences.
The licence can be changed after new model released. If you are to use this model for commercial purpose, Contact me.
## Model Details
**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
teknium/CollectiveCognition-v1.1-Mistral-7B, Apache 2.0
**Trained On**
A100 80GB * 4
# **Model Benchmark**
X
> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
---
|
Mouwiya/kinetics-400 | Mouwiya | 2024-05-25T20:53:02Z | 6 | 0 | tf-keras | [
"tf-keras",
"video-classification",
"en",
"dataset:ucf101",
"license:apache-2.0",
"model-index",
"region:us"
] | video-classification | 2024-05-25T20:43:56Z | ---
language: en
tags:
- video-classification
license: apache-2.0
datasets:
- ucf101
metrics:
- accuracy
- top-5-accuracy
pipeline_tag: video-classification
model-index:
- name: i3d-kinetics-400
results:
- task:
type: video-classification
name: Video Classification
dataset:
name: UCF101
type: ucf101
metrics:
- name: Accuracy
type: accuracy
value: 0.95
- name: Top-5 Accuracy
type: top-5-accuracy
value: 0.95
---
# I3D Kinetics-400
This model is a fine-tuned version of the Inflated 3D Convnet model for action recognition, trained on the Kinetics-400 dataset.
## Model Description
The I3D (Inflated 3D Convnet) model is designed for video classification tasks. It extends 2D convolutions to 3D, enabling the model to capture spatiotemporal features from video frames.
## Intended Uses
The model can be used for action recognition in videos. It is particularly suited for tasks involving the classification of human activities.
## Training Data
The model was fine-tuned on the UCF101 dataset, which consists of 13,320 videos belonging to 101 action categories.
## Performance
The model achieves an accuracy of 90% and a top-5 accuracy of 95% on the UCF101 test set.
## Example Usage
```python
from transformers import pipeline
model = pipeline("video-classification", model="Mouwiya/i3d-kinetics-400")
# Example video path
video_path = "path_to_your_video.mp4"
# Perform video classification
results = model(video_path)
print(results)
``` |
Mouwiya/kinetics-600 | Mouwiya | 2024-05-25T20:51:15Z | 7 | 0 | tf-keras | [
"tf-keras",
"video-classification",
"en",
"dataset:ucf101",
"license:apache-2.0",
"model-index",
"region:us"
] | video-classification | 2024-05-25T20:43:56Z | ---
language: en
tags:
- video-classification
license: apache-2.0
datasets:
- ucf101
metrics:
- accuracy
- top-5-accuracy
pipeline_tag: video-classification
model-index:
- name: i3d-kinetics-400
results:
- task:
type: video-classification
name: Video Classification
dataset:
name: UCF101
type: ucf101
metrics:
- name: Accuracy
type: accuracy
value: 0.98
- name: Top-5 Accuracy
type: top-5-accuracy
value: 0.95
---
# I3D Kinetics-600
This model is a fine-tuned version of the Inflated 3D Convnet model for action recognition, trained on the Kinetics-600 dataset.
## Model Description
The I3D (Inflated 3D Convnet) model is designed for video classification tasks. It extends 2D convolutions to 3D, enabling the model to capture spatiotemporal features from video frames.
## Intended Uses
The model can be used for action recognition in videos. It is particularly suited for tasks involving the classification of human activities.
## Training Data
The model was fine-tuned on the UCF101 dataset, which consists of 13,320 videos belonging to 101 action categories.
## Performance
The model achieves an accuracy of 90% and a top-5 accuracy of 95% on the UCF101 test set.
## Example Usage
```python
from transformers import pipeline
model = pipeline("video-classification", model="Mouwiya/i3d-kinetics-600")
# Example video path
video_path = "path_to_your_video.mp4"
# Perform video classification
results = model(video_path)
print(results)
```
|
amanvvip2/finetuned-breast_cancer_images | amanvvip2 | 2024-05-25T20:47:42Z | 382 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"breast cancer image classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-13T19:08:25Z | ---
license: apache-2.0
tags:
- breast cancer image classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: finetuned-breast_cancer_images
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-breast_cancer_images
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the breast-cancer image-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- Accuracy: 0.9620
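A minimal inference sketch (assuming the checkpoint loads through the image-classification `pipeline`; the image path is a placeholder, and this is not a substitute for clinical evaluation):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="amanvvip2/finetuned-breast_cancer_images")

# Path is a placeholder for a breast-cancer imaging sample.
for pred in classifier("sample_image.png"):
    print(pred["label"], round(pred["score"], 3))
```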
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5507 | 2.5 | 100 | 0.3762 | 0.8861 |
| 0.4735 | 5.0 | 200 | 0.3380 | 0.8987 |
| 0.368 | 7.5 | 300 | 0.3424 | 0.8987 |
| 0.3182 | 10.0 | 400 | 0.2979 | 0.9082 |
| 0.2952 | 12.5 | 500 | 0.2110 | 0.9462 |
| 0.2493 | 15.0 | 600 | 0.1675 | 0.9620 |
| 0.2716 | 17.5 | 700 | 0.1705 | 0.9462 |
| 0.2866 | 20.0 | 800 | 0.1643 | 0.9620 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.13.3
|
cyr19/gptneo-1b-en-quatrain-conditioned | cyr19 | 2024-05-25T20:43:55Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T20:41:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
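No official snippet is provided; the sketch below only assumes a standard causal-LM checkpoint, as the repository tags suggest. The conditioning format for quatrain generation implied by the repository name is not documented here, so the prompt is a plain-text placeholder.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "cyr19/gptneo-1b-en-quatrain-conditioned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# The expected conditioning/control tokens are undocumented; plain text is used here.
inputs = tokenizer("The autumn wind", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```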
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lex-hue/Delexa-V0.2-7b | lex-hue | 2024-05-25T20:42:41Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T19:32:04Z | ---
license: apache-2.0
inference: true
---
|
huzunali/Phi-3-mini-128k-instruct-Q8_0-GGUF | huzunali | 2024-05-25T20:38:31Z | 0 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-25T20:38:20Z | ---
language:
- en
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# huzunali/Phi-3-mini-128k-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo huzunali/Phi-3-mini-128k-instruct-Q8_0-GGUF --model phi-3-mini-128k-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo huzunali/Phi-3-mini-128k-instruct-Q8_0-GGUF --model phi-3-mini-128k-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m phi-3-mini-128k-instruct-q8_0.gguf -n 128
```
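If you prefer calling the model from Python, the same quantized file can be loaded with the `llama-cpp-python` bindings. A minimal sketch, assuming the package and `huggingface-hub` are installed and that your version provides the `Llama.from_pretrained` helper:
```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Download the GGUF file from this repo and load it locally.
llm = Llama.from_pretrained(
    repo_id="huzunali/Phi-3-mini-128k-instruct-Q8_0-GGUF",
    filename="phi-3-mini-128k-instruct-q8_0.gguf",
    n_ctx=2048,  # context window; raise it for longer prompts
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```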
|
andricValdez/multilingual-e5-large-finetuned-autext24 | andricValdez | 2024-05-25T20:35:24Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-25T05:31:01Z | ---
license: mit
base_model: intfloat/multilingual-e5-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: multilingual-e5-large-finetuned-autext24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual-e5-large-finetuned-autext24
This model is a fine-tuned version of [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2096
- Accuracy: 0.9673
- F1: 0.9673
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 4798 | 0.1903 | 0.9527 | 0.9526 |
| 0.1396 | 2.0 | 9596 | 0.1751 | 0.9672 | 0.9672 |
| 0.1396 | 3.0 | 14394 | 0.2093 | 0.9647 | 0.9646 |
| 0.0391 | 4.0 | 19192 | 0.1954 | 0.9690 | 0.9690 |
| 0.0391 | 5.0 | 23990 | 0.2096 | 0.9673 | 0.9673 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
JawadC/maroilles | JawadC | 2024-05-25T20:21:32Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-25T19:55:07Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MAROLLES cheese
widget:
- text: A rustic wooden table with a block of Marolles cheese and a few grapes.
output:
url: image_0.png
- text: A rustic wooden table with a block of Marolles cheese and a few grapes.
output:
url: image_1.png
- text: A rustic wooden table with a block of Marolles cheese and a few grapes.
output:
url: image_2.png
- text: A rustic wooden table with a block of Marolles cheese and a few grapes.
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - JawadC/maroilles
<Gallery />
## Model description
These are JawadC/maroilles LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MAROLLES cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/JawadC/maroilles/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
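Until the authors fill in the snippet above, here is a minimal sketch of one way to run the adapter, assuming the standard SDXL LoRA loading path in `diffusers` (the prompt and output filename are illustrative):
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach the LoRA weights from this repository.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("JawadC/maroilles")

# The trigger phrase (see "Trigger words" above) must appear in the prompt.
prompt = "a photo of MAROLLES cheese on a rustic wooden table"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("maroilles.png")
```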
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
ihebMissaoui/layoutlmv3-fine-tuned-funsd-kie | ihebMissaoui | 2024-05-25T20:15:34Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-25T20:15:13Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/elyadata/Ft%20layoutlmv3%20funsd%20max%20epochs%20100%20%2Cearlystop%3D4%2Cbatch%3D2%2Clr%3D1e-5%20adamw%2CFULL%20models%20params/runs/6zlgssbd)
# test
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8476
- Precision: 0.8955
- Recall: 0.9071
- F1: 0.9013
- Accuracy: 0.8691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 75 | 0.9234 | 0.6139 | 0.7526 | 0.6762 | 0.7416 |
| No log | 2.0 | 150 | 0.6101 | 0.7549 | 0.8341 | 0.7925 | 0.7940 |
| No log | 3.0 | 225 | 0.5135 | 0.8332 | 0.8882 | 0.8598 | 0.8091 |
| No log | 4.0 | 300 | 0.5467 | 0.8189 | 0.8624 | 0.8401 | 0.8109 |
| No log | 5.0 | 375 | 0.4879 | 0.8660 | 0.9051 | 0.8851 | 0.8504 |
| No log | 6.0 | 450 | 0.5352 | 0.8787 | 0.9180 | 0.8980 | 0.8480 |
| 0.5752 | 7.0 | 525 | 0.5900 | 0.8730 | 0.8982 | 0.8854 | 0.8343 |
| 0.5752 | 8.0 | 600 | 0.6014 | 0.8832 | 0.9016 | 0.8923 | 0.8506 |
| 0.5752 | 9.0 | 675 | 0.6173 | 0.8883 | 0.9126 | 0.9003 | 0.8538 |
| 0.5752 | 10.0 | 750 | 0.6278 | 0.8787 | 0.9141 | 0.8960 | 0.8571 |
| 0.5752 | 11.0 | 825 | 0.6573 | 0.8612 | 0.9155 | 0.8876 | 0.8326 |
| 0.5752 | 12.0 | 900 | 0.7333 | 0.8818 | 0.9006 | 0.8911 | 0.8387 |
| 0.5752 | 13.0 | 975 | 0.7489 | 0.8888 | 0.9136 | 0.9010 | 0.8502 |
| 0.1263 | 14.0 | 1050 | 0.7719 | 0.8908 | 0.8997 | 0.8952 | 0.8318 |
| 0.1263 | 15.0 | 1125 | 0.8295 | 0.8945 | 0.9101 | 0.9022 | 0.8438 |
| 0.1263 | 16.0 | 1200 | 0.8447 | 0.8798 | 0.9126 | 0.8959 | 0.8465 |
| 0.1263 | 17.0 | 1275 | 0.8359 | 0.9090 | 0.8932 | 0.9010 | 0.8486 |
| 0.1263 | 18.0 | 1350 | 0.8430 | 0.8966 | 0.9091 | 0.9028 | 0.8414 |
| 0.1263 | 19.0 | 1425 | 0.8179 | 0.8854 | 0.9021 | 0.8937 | 0.8400 |
| 0.0482 | 20.0 | 1500 | 0.8950 | 0.8968 | 0.8982 | 0.8975 | 0.8475 |
| 0.0482 | 21.0 | 1575 | 0.8790 | 0.9053 | 0.9121 | 0.9087 | 0.8565 |
| 0.0482 | 22.0 | 1650 | 0.7915 | 0.9056 | 0.9101 | 0.9078 | 0.8595 |
| 0.0482 | 23.0 | 1725 | 0.8760 | 0.8938 | 0.8952 | 0.8945 | 0.8504 |
| 0.0482 | 24.0 | 1800 | 0.8320 | 0.9113 | 0.9086 | 0.9100 | 0.8625 |
| 0.0482 | 25.0 | 1875 | 0.8880 | 0.9017 | 0.9021 | 0.9019 | 0.8538 |
| 0.0482 | 26.0 | 1950 | 0.8611 | 0.9083 | 0.9101 | 0.9092 | 0.8499 |
| 0.0163 | 27.0 | 2025 | 0.8747 | 0.9068 | 0.9086 | 0.9077 | 0.8600 |
| 0.0163 | 28.0 | 2100 | 0.8476 | 0.8955 | 0.9071 | 0.9013 | 0.8691 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Xiaolihai/flan-t5-large_MeDistill_28_rougeAve | Xiaolihai | 2024-05-25T20:13:43Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-25T13:33:03Z | ---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_trainer
model-index:
- name: flan-t5-large_MeDistill_28_rougeAve
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large_MeDistill_28_rougeAve
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.0.1
- Datasets 2.19.1
- Tokenizers 0.15.2
|
xuliu15/Frisian_32r_LoRA_10h | xuliu15 | 2024-05-25T20:12:09Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:mozilla-foundation/common_voice_6_1",
"base_model:openai/whisper-small",
"base_model:adapter:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | null | 2024-05-25T20:12:06Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openai/whisper-small
datasets:
- mozilla-foundation/common_voice_6_1
model-index:
- name: LoRA-Frisian-10h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA-Frisian-10h
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_6_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.5243 | 1.0 | 935 | 0.6300 |
| 0.432 | 2.0 | 1870 | 0.5607 |
| 0.3525 | 3.0 | 2805 | 0.5192 |
| 0.2719 | 4.0 | 3740 | 0.4961 |
| 0.2251 | 5.0 | 4675 | 0.4779 |
| 0.1761 | 6.0 | 5610 | 0.4505 |
| 0.1344 | 7.0 | 6545 | 0.4435 |
| 0.1019 | 8.0 | 7480 | 0.4414 |
| 0.0717 | 9.0 | 8415 | 0.4197 |
| 0.0434 | 10.0 | 9350 | 0.4149 |
| 0.023 | 11.0 | 10285 | 0.4077 |
| 0.0161 | 12.0 | 11220 | 0.4067 |
| 0.0075 | 13.0 | 12155 | 0.3982 |
| 0.0066 | 14.0 | 13090 | 0.4005 |
| 0.0034 | 15.0 | 14025 | 0.4038 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
seandearnaley/llama3-8b-sentiment-may-22-2024-2epoches | seandearnaley | 2024-05-25T20:10:11Z | 145 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:quantized:unsloth/llama-3-8b-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-23T00:42:22Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-Instruct
---
# Uploaded model
- **Developed by:** seandearnaley
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Developed to support [Elevating Sentiment Analysis @Medium](https://seandearnaley.medium.com/elevating-sentiment-analysis-ad02a316df1d).
Example Ollama Modelfile:
```
FROM ./llama3-8b-sentiment-may-22-2024-2epoches-unsloth.Q4_K_M.gguf
SYSTEM """
You are an advanced AI assistant created to perform sentiment analysis on text. Your task is to carefully read the text and analyze the sentiment it expresses towards the potential future stock value of any company mentioned. Analyze the sentiment of this text and respond with the appropriate JSON:
"""
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
# PARAMETER stop <|end_of_text|> # Default for Llama3
# PARAMETER stop </s> # Default for Mistral
# A parameter that sets the temperature of the model, controlling how creative or conservative the model's responses will be
PARAMETER temperature 0.2
# Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)
PARAMETER repeat_last_n 256
```
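With the Modelfile above saved alongside the downloaded GGUF file, the model can be built and queried locally. A short sketch, using `sentiment-llama3` as an illustrative model name:
```bash
# Register a local Ollama model from the Modelfile, then ask for a sentiment reading.
ollama create sentiment-llama3 -f Modelfile
ollama run sentiment-llama3 "Acme Corp beat earnings expectations and raised full-year guidance."
```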
|
Rupeshwaran/ppo-Huggy | Rupeshwaran | 2024-05-25T20:09:11Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-05-25T20:07:51Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Rupeshwaran/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Reyyan/cs-llama3-8B-3 | Reyyan | 2024-05-25T20:09:02Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-25T20:07:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
afrideva/Llama-3-Yggdrasil-8B-GGUF | afrideva | 2024-05-25T20:02:23Z | 51 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"ggml",
"quantized",
"text-generation",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Locutusque/Llama-3-Yggdrasil-8B",
"base_model:quantized:Locutusque/Llama-3-Yggdrasil-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T19:03:35Z | ---
base_model: Locutusque/Llama-3-Yggdrasil-8B
inference: true
library_name: transformers
license: llama3
model_creator: Locutusque
model_name: Llama-3-Yggdrasil-8B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- mergekit
- merge
- gguf
- ggml
- quantized
---
# Llama-3-Yggdrasil-8B-GGUF
Quantized GGUF model files for [Llama-3-Yggdrasil-8B](https://huggingface.co/Locutusque/Llama-3-Yggdrasil-8B) from [Locutusque](https://huggingface.co/Locutusque)
## Original Model Card:
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [Locutusque/Llama-3-Hercules-5.0-8B](https://huggingface.co/Locutusque/Llama-3-Hercules-5.0-8B)
* [Locutusque/llama-3-neural-chat-v2.2-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Meta-Llama-3-8B
# No parameters necessary for base model
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
density: 0.6
weight: 0.55
- model: Locutusque/llama-3-neural-chat-v2.2-8b
parameters:
density: 0.55
weight: 0.45
- model: Locutusque/Llama-3-Hercules-5.0-8B
parameters:
density: 0.57
weight: 0.5
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
int8_mask: true
dtype: bfloat16
``` |
zahidpichen/Fine-Tuned-LLM-model | zahidpichen | 2024-05-25T20:01:50Z | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"gpt_neox",
"arxiv:1910.09700",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-25T19:57:52Z | ---
library_name: peft
base_model: databricks/dolly-v2-3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
theo77186/dolphin-2.9.1-mistral-22b | theo77186 | 2024-05-25T19:58:04Z | 55 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T09:12:42Z | ---
license: apache-2.0
---
[cognitivecomputations/dolphin-2.9.1-mixtral-1x22b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-mixtral-1x22b) converted to the Mistral format. A Mixtral model with a single expert is mathematically equivalent to the corresponding Mistral model, so the conversion removes 344k parameters (presumably the per-layer routing gates, which are useless with a single expert) and avoids software bugs that surface when tools encounter a Mixtral with only one expert.
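In practice the conversion is mostly a state-dict renaming exercise: the per-layer routing gate is dropped (there is nothing to route with one expert) and the expert's `w1`/`w3`/`w2` projections map onto Mistral's `gate_proj`/`up_proj`/`down_proj`. A rough sketch of the idea (not the exact script used for this repo; key names as in the Hugging Face Mixtral/Mistral implementations):
```python
def mixtral_1x_to_mistral(state_dict: dict) -> dict:
    """Rename a single-expert Mixtral state dict into the Mistral layout.

    The config must also be switched from Mixtral to Mistral (model_type,
    architectures) and MoE-only fields such as num_local_experts dropped.
    """
    converted = {}
    for name, tensor in state_dict.items():
        if ".block_sparse_moe.gate." in name:
            continue  # the router is meaningless with a single expert, so drop it
        name = (name
                .replace(".block_sparse_moe.experts.0.w1.", ".mlp.gate_proj.")
                .replace(".block_sparse_moe.experts.0.w3.", ".mlp.up_proj.")
                .replace(".block_sparse_moe.experts.0.w2.", ".mlp.down_proj."))
        converted[name] = tensor
    return converted
```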
Note that ChatML is entirely broken in both the original and the converted model; I have no plausible explanation for why. Alpaca seems to work even though the model was not trained on it.
Original model card below.
---
# Dolphin 2.9.1 Mixtral 1x22b 🐬
Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
This model is based on Dolphin-2.9-Mixtral-8x22b, and is Apache-2.0 licensed.
The base model has 64k context, and the full-weight fine-tuning was with 16k sequence length.
It took 27 hours on 8xH100 provided by Crusoe Cloud.
This model was fully fine-tuned, targeting all layers.
The model is a single expert extracted with SLERP using a custom script that we've open-sourced: it produces one expert that is the combined SLERP of all 8 experts of the Mixtral architecture. We decided not to fully convert to a dense model, for the sake of keeping as much of the original model's performance as possible, as this process is already quite surgical and there are a lot of variables to take into account.
Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed under Apache 2.0. We grant permission for any use, including commercial, as long as it complies with the Apache-2.0 license. Dolphin was trained using data generated from GPT-4, among other models. For more details on the extraction process of the expert model, visit our GitHub repository: https://github.com/cognitivecomputations/extract-expert/tree/main
## Evals

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: cognitivecomputations/mixtral-1x22b-base
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# trust_remote_code: true
# load_in_8bit: true
# load_in_4bit: true
# strict: false
datasets:
- path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl
type: sharegpt
conversation: chatml
chat_template: chatml
dataset_prepared_path: yi34b-prepared
val_set_size: 0.01
output_dir: ./1x22b-out
# adapter: qlora
# lora_r: 16
# lora_alpha: 16
# lora_modules_to_save: [embed_tokens, lm_head]
# lora_dropout: 0.05
# lora_target_linear: true
# unfrozen_parameters:
# - ^lm_head.weight$
# - ^model.embed_tokens.weight$
# # input_layernorm layers
# - model.layers.0.input_layernorm
# - model.layers.1.input_layernorm
# - model.layers.2.input_layernorm
# - model.layers.3.input_layernorm
# - model.layers.4.input_layernorm
# - model.layers.5.input_layernorm
# - model.layers.6.input_layernorm
# - model.layers.7.input_layernorm
# - model.layers.8.input_layernorm
# - model.layers.9.input_layernorm
# - model.layers.10.input_layernorm
# - model.layers.11.input_layernorm
# - model.layers.12.input_layernorm
# - model.layers.13.input_layernorm
# - model.layers.14.input_layernorm
# - model.layers.15.input_layernorm
# - model.layers.16.input_layernorm
# - model.layers.17.input_layernorm
# - model.layers.18.input_layernorm
# - model.layers.19.input_layernorm
# - model.layers.20.input_layernorm
# - model.layers.21.input_layernorm
# - model.layers.22.input_layernorm
# - model.layers.23.input_layernorm
# # lm_head layers
# # mlp.down_proj layers
# - model.layers.17.mlp.down_proj
# - model.layers.18.mlp.down_proj
# - model.layers.19.mlp.down_proj
# - model.layers.20.mlp.down_proj
# - model.layers.21.mlp.down_proj
# - model.layers.22.mlp.down_proj
# - model.layers.23.mlp.down_proj
# - model.layers.24.mlp.down_proj
# - model.layers.25.mlp.down_proj
# - model.layers.26.mlp.down_proj
# - model.layers.27.mlp.down_proj
# - model.layers.28.mlp.down_proj
# - model.layers.29.mlp.down_proj
# - model.layers.30.mlp.down_proj
# - model.layers.31.mlp.down_proj
# - model.layers.32.mlp.down_proj
# - model.layers.33.mlp.down_proj
# - model.layers.34.mlp.down_proj
# - model.layers.35.mlp.down_proj
# - model.layers.36.mlp.down_proj
# - model.layers.37.mlp.down_proj
# - model.layers.38.mlp.down_proj
# - model.layers.39.mlp.down_proj
# - model.layers.40.mlp.down_proj
# # mlp.gate_proj layers
# - model.layers.51.mlp.gate_proj
# - model.layers.50.mlp.gate_proj
# - model.layers.53.mlp.gate_proj
# - model.layers.52.mlp.gate_proj
# - model.layers.49.mlp.gate_proj
# - model.layers.45.mlp.gate_proj
# - model.layers.46.mlp.gate_proj
# - model.layers.47.mlp.gate_proj
# - model.layers.57.mlp.gate_proj
# - model.layers.48.mlp.gate_proj
# - model.layers.56.mlp.gate_proj
# - model.layers.41.mlp.gate_proj
# - model.layers.54.mlp.gate_proj
# - model.layers.43.mlp.gate_proj
# - model.layers.44.mlp.gate_proj
# - model.layers.60.mlp.gate_proj
# - model.layers.55.mlp.gate_proj
# - model.layers.40.mlp.gate_proj
# - model.layers.42.mlp.gate_proj
# - model.layers.58.mlp.gate_proj
# - model.layers.36.mlp.gate_proj
# - model.layers.37.mlp.gate_proj
# - model.layers.38.mlp.gate_proj
# - model.layers.39.mlp.gate_proj
# # mlp.up_proj layers
# - model.layers.50.mlp.up_proj
# - model.layers.51.mlp.up_proj
# - model.layers.41.mlp.up_proj
# - model.layers.49.mlp.up_proj
# - model.layers.43.mlp.up_proj
# - model.layers.44.mlp.up_proj
# - model.layers.40.mlp.up_proj
# - model.layers.45.mlp.up_proj
# - model.layers.47.mlp.up_proj
# - model.layers.48.mlp.up_proj
# - model.layers.46.mlp.up_proj
# - model.layers.42.mlp.up_proj
# - model.layers.39.mlp.up_proj
# - model.layers.36.mlp.up_proj
# - model.layers.37.mlp.up_proj
# - model.layers.38.mlp.up_proj
# - model.layers.56.mlp.up_proj
# - model.layers.57.mlp.up_proj
# - model.layers.53.mlp.up_proj
# - model.layers.31.mlp.up_proj
# - model.layers.32.mlp.up_proj
# - model.layers.34.mlp.up_proj
# - model.layers.35.mlp.up_proj
# - model.layers.33.mlp.up_proj
# # model.embed_tokens layers
# # model.norm layers
# # post_attention_layernorm layers
# - model.layers.0.post_attention_layernorm
# - model.layers.1.post_attention_layernorm
# - model.layers.2.post_attention_layernorm
# - model.layers.3.post_attention_layernorm
# - model.layers.4.post_attention_layernorm
# - model.layers.5.post_attention_layernorm
# - model.layers.6.post_attention_layernorm
# - model.layers.7.post_attention_layernorm
# - model.layers.8.post_attention_layernorm
# - model.layers.9.post_attention_layernorm
# - model.layers.10.post_attention_layernorm
# - model.layers.11.post_attention_layernorm
# - model.layers.12.post_attention_layernorm
# - model.layers.13.post_attention_layernorm
# - model.layers.14.post_attention_layernorm
# - model.layers.15.post_attention_layernorm
# - model.layers.16.post_attention_layernorm
# - model.layers.17.post_attention_layernorm
# - model.layers.18.post_attention_layernorm
# - model.layers.19.post_attention_layernorm
# - model.layers.20.post_attention_layernorm
# - model.layers.21.post_attention_layernorm
# - model.layers.22.post_attention_layernorm
# - model.layers.23.post_attention_layernorm
# # self_attn.k_proj layers
# - model.layers.42.self_attn.k_proj
# - model.layers.41.self_attn.k_proj
# - model.layers.39.self_attn.k_proj
# - model.layers.35.self_attn.k_proj
# - model.layers.28.self_attn.k_proj
# - model.layers.79.self_attn.k_proj
# - model.layers.43.self_attn.k_proj
# - model.layers.32.self_attn.k_proj
# - model.layers.73.self_attn.k_proj
# - model.layers.31.self_attn.k_proj
# - model.layers.29.self_attn.k_proj
# - model.layers.76.self_attn.k_proj
# - model.layers.30.self_attn.k_proj
# - model.layers.40.self_attn.k_proj
# - model.layers.33.self_attn.k_proj
# - model.layers.78.self_attn.k_proj
# - model.layers.34.self_attn.k_proj
# - model.layers.37.self_attn.k_proj
# - model.layers.45.self_attn.k_proj
# - model.layers.44.self_attn.k_proj
# - model.layers.71.self_attn.k_proj
# - model.layers.26.self_attn.k_proj
# - model.layers.74.self_attn.k_proj
# - model.layers.27.self_attn.k_proj
# # self_attn.o_proj layers
# - model.layers.35.self_attn.o_proj
# - model.layers.34.self_attn.o_proj
# - model.layers.37.self_attn.o_proj
# - model.layers.33.self_attn.o_proj
# - model.layers.31.self_attn.o_proj
# - model.layers.27.self_attn.o_proj
# - model.layers.38.self_attn.o_proj
# - model.layers.24.self_attn.o_proj
# - model.layers.39.self_attn.o_proj
# - model.layers.43.self_attn.o_proj
# - model.layers.29.self_attn.o_proj
# - model.layers.0.self_attn.o_proj
# - model.layers.50.self_attn.o_proj
# - model.layers.32.self_attn.o_proj
# - model.layers.45.self_attn.o_proj
# - model.layers.30.self_attn.o_proj
# - model.layers.60.self_attn.o_proj
# - model.layers.23.self_attn.o_proj
# - model.layers.18.self_attn.o_proj
# - model.layers.67.self_attn.o_proj
# - model.layers.57.self_attn.o_proj
# - model.layers.20.self_attn.o_proj
# - model.layers.76.self_attn.o_proj
# - model.layers.28.self_attn.o_proj
# # self_attn.q_proj layers
# - model.layers.1.self_attn.q_proj
# - model.layers.6.self_attn.q_proj
# - model.layers.0.self_attn.q_proj
# - model.layers.5.self_attn.q_proj
# - model.layers.2.self_attn.q_proj
# - model.layers.7.self_attn.q_proj
# - model.layers.3.self_attn.q_proj
# - model.layers.4.self_attn.q_proj
# - model.layers.8.self_attn.q_proj
# - model.layers.9.self_attn.q_proj
# - model.layers.61.self_attn.q_proj
# - model.layers.10.self_attn.q_proj
# - model.layers.62.self_attn.q_proj
# - model.layers.36.self_attn.q_proj
# - model.layers.15.self_attn.q_proj
# - model.layers.11.self_attn.q_proj
# - model.layers.17.self_attn.q_proj
# - model.layers.60.self_attn.q_proj
# - model.layers.63.self_attn.q_proj
# - model.layers.64.self_attn.q_proj
# - model.layers.29.self_attn.q_proj
# - model.layers.30.self_attn.q_proj
# - model.layers.55.self_attn.q_proj
# - model.layers.34.self_attn.q_proj
# # self_attn.v_proj layers
# - model.layers.12.self_attn.v_proj
# - model.layers.16.self_attn.v_proj
# - model.layers.18.self_attn.v_proj
# - model.layers.19.self_attn.v_proj
# - model.layers.20.self_attn.v_proj
# - model.layers.21.self_attn.v_proj
# - model.layers.22.self_attn.v_proj
# - model.layers.23.self_attn.v_proj
# - model.layers.24.self_attn.v_proj
# - model.layers.25.self_attn.v_proj
# - model.layers.26.self_attn.v_proj
# - model.layers.27.self_attn.v_proj
# - model.layers.28.self_attn.v_proj
# - model.layers.29.self_attn.v_proj
# - model.layers.30.self_attn.v_proj
# - model.layers.31.self_attn.v_proj
# - model.layers.32.self_attn.v_proj
# - model.layers.33.self_attn.v_proj
# - model.layers.34.self_attn.v_proj
# - model.layers.35.self_attn.v_proj
# - model.layers.36.self_attn.v_proj
# - model.layers.37.self_attn.v_proj
# - model.layers.38.self_attn.v_proj
# - model.layers.39.self_attn.v_proj
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
# adapter: lora
# lora_model_dir:
# lora_r: 32
# lora_alpha: 16
# lora_dropout: 0.05
# lora_target_linear: true
# lora_fan_in_fan_out:
wandb_project: dolphin-mixtral1x22b
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint: /workspace/axolotl2/axolotl/1x22b-out/checkpoint-507
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 4
save_total_limit: 2
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
eos_token: "<|im_end|>"
bos_token: "<s>"
# pad_token: "<unk>"
unk_token: "<unk>"
tokens:
- "<|im_start|>"
```
</details><br>
# 1x22b-out
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9818 | 0.0015 | 1 | 0.9854 |
| 0.4783 | 0.2499 | 169 | 0.5042 |
| 0.464 | 0.4997 | 338 | 0.4755 |
| 0.4561 | 0.7496 | 507 | 0.4593 |
| 0.3981 | 0.9994 | 676 | 0.4553 |
| 0.3725 | 1.2378 | 845 | 0.4525 |
| 0.3624 | 1.4877 | 1014 | 0.4457 |
| 0.359 | 1.7376 | 1183 | 0.4393 |
| 0.375 | 1.9874 | 1352 | 0.4345 |
| 0.2899 | 2.2260 | 1521 | 0.4488 |
| 0.2848 | 2.4759 | 1690 | 0.4473 |
| 0.2935 | 2.7257 | 1859 | 0.4470 |
| 0.2065 | 2.9756 | 2028 | 0.4572 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
iisking/test_2 | iisking | 2024-05-25T19:55:15Z | 80 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-25T19:51:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jdqwoi/TooManyMixRolePlay-7B-Story_V1 | jdqwoi | 2024-05-25T19:53:18Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"jdqwoi/TooManyMixRolePlay-7B-Story",
"jdqwoi/02",
"base_model:jdqwoi/02",
"base_model:merge:jdqwoi/02",
"base_model:jdqwoi/TooManyMixRolePlay-7B-Story",
"base_model:merge:jdqwoi/TooManyMixRolePlay-7B-Story",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T21:36:44Z | ---
tags:
- merge
- mergekit
- lazymergekit
- jdqwoi/TooManyMixRolePlay-7B-Story
- jdqwoi/02
base_model:
- jdqwoi/TooManyMixRolePlay-7B-Story
- jdqwoi/02
---
# TooManyMixRolePlay-7B-Story_V1
TooManyMixRolePlay-7B-Story_V1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [jdqwoi/TooManyMixRolePlay-7B-Story](https://huggingface.co/jdqwoi/TooManyMixRolePlay-7B-Story)
* [jdqwoi/02](https://huggingface.co/jdqwoi/02)
# EXL2 quants of jdqwoi/TooManyMixRolePlay-7B-Story_V1 by [kim512](https://huggingface.co/kim512)
* [4.00 bits per weight](https://huggingface.co/kim512/TooManyMixRolePlay-7B-Story_V1-4.0bpw-exl2)
* [5.00 bits per weight](https://huggingface.co/kim512/TooManyMixRolePlay-7B-Story_V1-5.0bpw-exl2)
* [6.00 bits per weight](https://huggingface.co/kim512/TooManyMixRolePlay-7B-Story_V1-6.0bpw-exl2)
* [7.00 bits per weight](https://huggingface.co/kim512/TooManyMixRolePlay-7B-Story_V1-7.0bpw-exl2)
* [8.00 bits per weight](https://huggingface.co/kim512/TooManyMixRolePlay-7B-Story_V1-8.0bpw-exl2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: jdqwoi/TooManyMixRolePlay-7B-Story
layer_range: [0, 32]
- model: jdqwoi/02
layer_range: [0, 32]
merge_method: slerp
base_model: jdqwoi/TooManyMixRolePlay-7B-Story
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jdqwoi/TooManyMixRolePlay-7B-Story_V1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf | RichardErkhov | 2024-05-25T19:48:25Z | 35 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-25T16:50:42Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b - GGUF
- Model creator: https://huggingface.co/S4sch/
- Original model: https://huggingface.co/S4sch/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q2_K.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q2_K.gguf) | Q2_K | 3.95GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_XS.gguf) | IQ3_XS | 4.39GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_S.gguf) | IQ3_S | 4.63GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_S.gguf) | Q3_K_S | 4.61GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_M.gguf) | IQ3_M | 4.78GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K.gguf) | Q3_K | 5.13GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_M.gguf) | Q3_K_M | 5.13GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_L.gguf) | Q3_K_L | 5.58GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ4_XS.gguf) | IQ4_XS | 5.75GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_0.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_0.gguf) | Q4_0 | 6.0GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ4_NL.gguf) | IQ4_NL | 6.06GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_S.gguf) | Q4_K_S | 6.04GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K.gguf) | Q4_K | 6.38GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf) | Q4_K_M | 6.38GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_1.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_1.gguf) | Q4_1 | 6.65GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_0.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_0.gguf) | Q5_0 | 7.31GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_S.gguf) | Q5_K_S | 7.31GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K.gguf) | Q5_K | 7.5GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_M.gguf) | Q5_K_M | 7.5GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_1.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_1.gguf) | Q5_1 | 7.96GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q6_K.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q6_K.gguf) | Q6_K | 8.7GB |
| [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q8_0.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q8_0.gguf) | Q8_0 | 11.27GB |
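These files can be used with any GGUF-compatible runtime. As one hedged example (the choice of `llama-cpp-python`, the Q4_K_M file, and the sampling settings are illustrative assumptions, not part of this release), a quantized file could be loaded like this:
```python
from llama_cpp import Llama

# Q4_K_M is a common size/quality trade-off; any filename from the table above works.
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf",
    filename="Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to available RAM
)
out = llm("Explain what a frankenmerge is in one paragraph.", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```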
Original model description:
---
license: apache-2.0
---
Frankenmerge 11b between teknium/OpenHermes-2.5-Mistral-7B and Intel/neural-chat-7b-v3-1
GGUF: https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF
The merge interleaves layer ranges from the two source models as follows:

- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [0, 8]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [4, 12]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [9, 16]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [13, 20]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [17, 24]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [21, 28]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [25, 32]

merge_method: passthrough
Benchmarks are coming soon...
|
cxx5208/NER_finetuned | cxx5208 | 2024-05-25T19:48:11Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-25T04:53:10Z | # DistilBERT Fine-Tuned for Named Entity Recognition (NER)



This repository contains a DistilBERT model fine-tuned for Named Entity Recognition (NER). The model has been trained to identify and classify named entities such as names of people, places, organizations, and dates in text.
## Model Details
- **Model:** [DistilBERT](https://huggingface.co/distilbert-base-cased)
- **Task:** Named Entity Recognition (NER)
- **Training Dataset:** Custom dataset
- **Evaluation Metrics:** Precision, Recall, F1-Score, Accuracy
## Usage
You can use this model with the Hugging Face `transformers` library to perform NER on your text data. Below are examples of how to use the model and tokenizer.
### Installation
First, make sure you have the `transformers` library installed:
```bash
pip install transformers
```
### Load the Model
```python
from transformers import pipeline
# Load the model and tokenizer
token_classifier = pipeline(
"token-classification",
model="cxx5208/NER_finetuned",
tokenizer="cxx5208/NER_finetuned",
aggregation_strategy="simple"
)
# Example text
text = "My name is Yeshvanth Raju Kurapati. I study at San Jose State University"
# Perform NER
entities = token_classifier(text)
print(entities)
```
### Example Output
```python
[
{'entity_group': 'PER',
'score': 0.99808735,
'word': 'Yeshvanth Raju Kurapati',
'start': 11,
'end': 34},
{'entity_group': 'ORG',
'score': 0.9923826,
'word': 'San Jose State University',
'start': 47,
'end': 72}
]
```
## Training Details
The model was fine-tuned using the following hyperparameters:
- **Batch Size:** 16
- **Learning Rate:** 5e-5
- **Epochs:** 3
- **Optimizer:** AdamW
The training process involved using a standard NER dataset (e.g., CoNLL-2003) and included steps for tokenization, data preprocessing, and evaluation.
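As a rough illustration of that setup, the sketch below shows how such a fine-tune could be reproduced with the `transformers` `Trainer`. The `conll2003` dataset, the label-alignment helper, and the output directory are assumptions made for the example; this is not the exact training script used for this checkpoint.

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)

dataset = load_dataset("conll2003")  # assumed dataset; the card only says "a standard NER dataset"
labels = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModelForTokenClassification.from_pretrained("distilbert-base-cased", num_labels=len(labels))

def tokenize_and_align(batch):
    # Tokenize pre-split words and align word-level NER tags to sub-tokens.
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = enc.word_ids(batch_index=i)
        prev, ids = None, []
        for w in word_ids:
            if w is None or w == prev:
                ids.append(-100)      # ignore special tokens and sub-token continuations
            else:
                ids.append(tags[w])   # label only the first sub-token of each word
            prev = w
        enc["labels"].append(ids)
    return enc

tokenized = dataset.map(tokenize_and_align, batched=True)

args = TrainingArguments(
    output_dir="ner_finetuned",
    learning_rate=5e-5,               # hyperparameters listed above
    per_device_train_batch_size=16,
    num_train_epochs=3,               # AdamW is the Trainer default optimizer
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```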
## Evaluation
The model was evaluated using precision, recall, F1-score, and accuracy metrics. The performance metrics are as follows:
- **Precision:** 0.952
- **Recall:** 0.948
- **F1-Score:** 0.950
- **Accuracy:** 0.975
## About DistilBERT
DistilBERT is a smaller, faster, cheaper version of BERT developed by Hugging Face. It retains 97% of BERT’s language understanding capabilities while being 60% faster and 40% smaller.
## License
This model is released under the [MIT License](LICENSE).
## Acknowledgements
- Hugging Face for the [transformers](https://github.com/huggingface/transformers) library and DistilBERT model.
- The authors of the original dataset used for training.
|
Arbi-Houssem/speecht5_ar_tn_1.1 | Arbi-Houssem | 2024-05-25T19:46:15Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"arr",
"dataset:Arbi-Houssem/datasetSTT-TTS",
"base_model:MBZUAI/speecht5_tts_clartts_ar",
"base_model:finetune:MBZUAI/speecht5_tts_clartts_ar",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-05-25T17:30:54Z | ---
language:
- arr
license: mit
base_model: MBZUAI/speecht5_tts_clartts_ar
tags:
- generated_from_trainer
datasets:
- Arbi-Houssem/datasetSTT-TTS
model-index:
- name: SpeechT5 TTS Tunisien
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Tunisien
This model is a fine-tuned version of [MBZUAI/speecht5_tts_clartts_ar](https://huggingface.co/MBZUAI/speecht5_tts_clartts_ar) on the datasetSTT-TTS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5464
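
Since the usage sections below are still placeholders, here is a minimal inference sketch. The vocoder checkpoint (`microsoft/speecht5_hifigan`), the sample text, and the zero speaker embedding are assumptions: SpeechT5 requires an external 512-dimensional x-vector speaker embedding, and a real application should extract one from a reference recording instead of using a placeholder.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("Arbi-Houssem/speecht5_ar_tn_1.1")
model = SpeechT5ForTextToSpeech.from_pretrained("Arbi-Houssem/speecht5_ar_tn_1.1")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="مرحبا بكم", return_tensors="pt")

# Placeholder 512-dim x-vector; replace with an embedding from a reference speaker.
speaker_embeddings = torch.zeros(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```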
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.544 | 14.2857 | 200 | 0.6401 |
| 0.4869 | 28.5714 | 400 | 0.5907 |
| 0.4384 | 42.8571 | 600 | 0.5684 |
| 0.4069 | 57.1429 | 800 | 0.5577 |
| 0.3992 | 71.4286 | 1000 | 0.5464 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
lmbelo/OpenELM-270M-Function-Calling | lmbelo | 2024-05-25T19:40:41Z | 13 | 0 | mlx | [
"mlx",
"safetensors",
"openelm",
"custom_code",
"license:other",
"region:us"
] | null | 2024-05-25T19:39:27Z | ---
license: other
tags:
- mlx
license_name: apple-sample-code-license
license_link: LICENSE
---
# lmbelo/OpenELM-270M-Function-Calling
The Model [lmbelo/OpenELM-270M-Function-Calling](https://huggingface.co/lmbelo/OpenELM-270M-Function-Calling) was converted to MLX format from [lmbelo/OpenELM-270M-Instruct](https://huggingface.co/lmbelo/OpenELM-270M-Instruct) using mlx-lm version **0.13.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("lmbelo/OpenELM-270M-Function-Calling")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
hgnoi/QN1aO8xC5BYkXdLR | hgnoi | 2024-05-25T19:40:01Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T19:37:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Pawankumarr009/ppo-LunarLander-v2 | Pawankumarr009 | 2024-05-25T19:38:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-25T19:38:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 291.64 +/- 13.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual sb3 naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed; check the repository's file list if loading fails.
checkpoint = load_from_hub("Pawankumarr009/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mayflowergmbh/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF | mayflowergmbh | 2024-05-25T19:33:43Z | 55 | 2 | transformers | [
"transformers",
"gguf",
"de",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-25T19:15:31Z | ---
language:
- de
license: llama3
library_name: transformers
tags:
- gguf
---
# Llama3-DiscoLeo-Instruct 8B 32k-context (version 0.1)
## Thanks and Accreditation
[DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729)
is the result of a joint effort between [DiscoResearch](https://huggingface.co/DiscoResearch) and [Occiglot](https://huggingface.co/occiglot)
with support from the [DFKI](https://www.dfki.de/web/) (German Research Center for Artificial Intelligence) and [hessian.Ai](https://hessian.ai).
Occiglot kindly handled data preprocessing, filtering, and deduplication as part of their latest [dataset release](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5), as well as sharing their compute allocation at hessian.Ai's 42 Supercomputer.
## Model Overview
DiscoResearch/Llama3_DiscoLeo_Instruct_8B_32k_v0.1 is an instruction tuned version of our long-context [Llama3-German-8B-32k](https://huggingface.co/DiscoResearch/Llama3_German_8B_32k).
The base model was derived from [Meta's Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) through continuous pretraining on 65 billion high-quality German tokens, similar to previous [LeoLM](https://huggingface.co/LeoLM) or [Occiglot](https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01) models.
For the long-context version we trained on an additional 100 million tokens at 32k context length, using a rope_theta value of 1.5e6 and a learning rate of 1.5e-5 with a batch size of 256*8192 and otherwise equal hyperparameters to the base model.
We finetuned this checkpoint on the German Instruction dataset from DiscoResearch created by [Jan-Philipp Harries](https://huggingface.co/jphme) and [Daniel Auras](https://huggingface.co/rasdani) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)).
## How to use
Llama3_DiscoLeo_Instruct_8B_32k_v0.1 uses the [Llama-3 chat template](https://github.com/meta-llama/llama3?tab=readme-ov-file#instruction-tuned-models), which can be easily used with [transformer's chat templating](https://huggingface.co/docs/transformers/main/en/chat_templating).
See [below](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_32k_v0.1#usage-example) for a usage example.
## Model Training and Hyperparameters
The model was full-finetuned with axolotl on the [hessian.Ai 42](https://hessian.ai) supercomputer with a 32,768-token context length, a learning rate of 2e-5, and a batch size of 16.
## Evaluation and Results
We evaluated the model using a suite of common English benchmarks and their German counterparts with [GermanBench](https://github.com/bjoernpl/GermanBenchmark).
In the image and corresponding table below, you can see the benchmark scores for the different instruct models compared to Meta's instruct version. All checkpoints are available in this [collection](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729).

| Model | truthful_qa_de | truthfulqa_mc | arc_challenge | arc_challenge_de | hellaswag | hellaswag_de | MMLU | MMLU-DE | mean |
|----------------------------------------------------|----------------|---------------|---------------|------------------|-------------|--------------|-------------|-------------|-------------|
| meta-llama/Meta-Llama-3-8B-Instruct | 0.47498 | 0.43923 | **0.59642** | 0.47952 | **0.82025** | 0.60008 | **0.66658** | 0.53541 | 0.57656 |
| DiscoResearch/Llama3-German-8B | 0.49499 | 0.44838 | 0.55802 | 0.49829 | 0.79924 | 0.65395 | 0.62240 | 0.54413 | 0.57743 |
| DiscoResearch/Llama3-German-8B-32k | 0.48920 | 0.45138 | 0.54437 | 0.49232 | 0.79078 | 0.64310 | 0.58774 | 0.47971 | 0.55982 |
| DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1 | **0.53042** | 0.52867 | 0.59556 | **0.53839** | 0.80721 | 0.66440 | 0.61898 | 0.56053 | **0.60552** |
| **DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1** | 0.52749 | **0.53245** | 0.58788 | 0.53754 | 0.80770 | **0.66709** | 0.62123 | **0.56238** | 0.60547 |
## Model Configurations
We release DiscoLeo-8B in the following configurations:
1. [Base model with continued pretraining](https://huggingface.co/DiscoResearch/Llama3-German_8B)
2. [Long-context version (32k context length)](https://huggingface.co/DiscoResearch/Llama3_German_8B_32k)
3. [Instruction-tuned version of the base model](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1)
4. [Instruction-tuned version of the long-context model](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_32k_v0.1) (This model)
5. [Experimental `DARE-TIES` Merge with Llama3-Instruct](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_8B_DARE_Experimental)
6. [Collection of Quantized versions](https://huggingface.co/collections/DiscoResearch/discoleo-8b-quants-6651bcf8f72c9a37ce485d42)
## Usage Example
Here's how to use the model with transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
    "DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1")
prompt = "Schreibe ein Essay über die Bedeutung der Energiewende für Deutschlands Wirtschaft"
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Acknowledgements
The model was trained and evaluated by [Björn Plüster](https://huggingface.co/bjoernp) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)) with data preparation and project supervision by [Manuel Brack](http://manuel-brack.eu) ([DFKI](https://www.dfki.de/web/), [TU-Darmstadt](https://www.tu-darmstadt.de/)). Initial work on dataset collection and curation was performed by [Malte Ostendorff](https://ostendorff.org) and [Pedro Ortiz Suarez](https://portizs.eu). Instruction tuning was done with the DiscoLM German dataset created by [Jan-Philipp Harries](https://huggingface.co/jphme) and [Daniel Auras](https://huggingface.co/rasdani) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)). We extend our gratitude to [LAION](https://laion.ai/) and friends, especially [Christoph Schuhmann](https://entwickler.de/experten/christoph-schuhmann) and [Jenia Jitsev](https://huggingface.co/JJitsev), for initiating this collaboration.
The model training was supported by a compute grant at the [42 supercomputer](https://hessian.ai/) which is a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Art (HMWK)](https://wissenschaft.hessen.de) & the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)) and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)).
The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)
through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D). |
huynq3Cyradar/bert-large-finetuned-phishing-webpage-cleaned-version | huynq3Cyradar | 2024-05-25T19:31:54Z | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-25T09:33:43Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: bert-large-finetuned-phishing-webpage-cleaned-version
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-finetuned-phishing-webpage-cleaned-version
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0324
- Accuracy: 0.9911
- Precision: 0.9931
- Recall: 0.9883
- False Positive Rate: 0.0063
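
Since the sections below are placeholders, here is a minimal usage sketch. The sample string is illustrative only, and the predicted label names come from whatever `id2label` mapping was saved with the fine-tuned config (they are not documented in this card):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="huynq3Cyradar/bert-large-finetuned-phishing-webpage-cleaned-version",
)

# The model was trained on webpage content; this short string is only a placeholder input.
sample = "Your account has been locked. Verify your password at http://secure-login-update.example"
print(classifier(sample))
```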
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | False Positive Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:-------------------:|
| 0.0931 | 1.0 | 562 | 0.0409 | 0.9861 | 0.9948 | 0.9762 | 0.0047 |
| 0.0345 | 2.0 | 1124 | 0.0348 | 0.9900 | 0.9918 | 0.9874 | 0.0075 |
| 0.0224 | 3.0 | 1687 | 0.0324 | 0.9911 | 0.9931 | 0.9883 | 0.0063 |
| 0.0156 | 4.0 | 2248 | 0.0509 | 0.9913 | 0.9914 | 0.9904 | 0.0079 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
EternalRecursion/llm_clone_llama | EternalRecursion | 2024-05-25T19:31:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-28T19:57:04Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** EternalRecursion
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|