Dataset schema: modelId (string, length 5–139); author (string, length 2–42); last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-26 12:28:48); downloads (int64, 0 to 223M); likes (int64, 0 to 11.7k); library_name (string, 498 classes); tags (sequence, length 1 to 4.05k); pipeline_tag (string, 54 classes); createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-26 12:28:16); card (string, length 11 to 1.01M).

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
vhs01/mistral-7b-dolly | vhs01 | 2024-05-24T06:36:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T04:02:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
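In the absence of author-provided instructions, a minimal sketch (assuming the repository holds standard causal-LM weights that the Auto classes can load):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo contains full causal-LM weights, not just an adapter.
model_id = "vhs01/mistral-7b-dolly"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Below is an instruction. Respond helpfully.\n", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```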
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jayasuryajsk/phi-3-1B | jayasuryajsk | 2024-05-24T06:36:22Z | 279 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T06:34:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
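In the absence of author-provided instructions, a minimal sketch; the `custom_code` tag suggests the repo ships its own modeling code, so `trust_remote_code=True` is assumed to be required:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jayasuryajsk/phi-3-1B"
# trust_remote_code is assumed necessary because of the "custom_code" tag.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```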
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zgh36/HRoom | zgh36 | 2024-05-24T06:27:53Z | 64 | 0 | transformers | [
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-24T06:04:29Z | ---
license: mit
base_model: camembert-base
tags:
- generated_from_keras_callback
model-index:
- name: HRoom
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HRoom
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
No results were recorded on the evaluation set.
## Model description
More information needed
## Intended uses & limitations
More information needed
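A hedged usage sketch based on the card's tags (fill-mask pipeline, TensorFlow weights; the `<mask>` token follows the CamemBERT convention):
```python
from transformers import pipeline

# framework="tf" selects the TensorFlow weights advertised by the "tf" tag.
fill_mask = pipeline("fill-mask", model="zgh36/HRoom", framework="tf")
print(fill_mask("Paris est la <mask> de la France."))
```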
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.41.0
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
VaibhavSethia07/lora_model | VaibhavSethia07 | 2024-05-24T06:26:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-16T09:08:55Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** VaibhavSethia07
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
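A hedged loading sketch with Unsloth (whether the repo stores merged weights or a LoRA adapter is not stated; `FastLanguageModel` handles both cases):
```python
from unsloth import FastLanguageModel

# Assumption: the repo loads directly, in 4-bit like its base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="VaibhavSethia07/lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch the model to inference mode
```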
|
AgeNtX071/EISproject | AgeNtX071 | 2024-05-24T06:26:07Z | 0 | 1 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:Rocketknight1/falcon-rw-1b",
"base_model:adapter:Rocketknight1/falcon-rw-1b",
"region:us"
] | null | 2024-05-24T05:48:29Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: Rocketknight1/falcon-rw-1b
model-index:
- name: EISproject
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EISproject
This model is a fine-tuned version of [Rocketknight1/falcon-rw-1b](https://huggingface.co/Rocketknight1/falcon-rw-1b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
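For reference, the list above corresponds roughly to the following `transformers.TrainingArguments` (a hedged reconstruction, not the original training script):
```python
from transformers import TrainingArguments

# Hedged mapping of the listed hyperparameters onto standard argument names.
args = TrainingArguments(
    output_dir="EISproject",          # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,    # 16 * 4 = 64 total train batch size
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=10,
)
```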
### Training results
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1 |
jiahuimbzuai/llava-v1.5-13b-vlguard-10k3e-lora | jiahuimbzuai | 2024-05-24T06:25:49Z | 2 | 0 | peft | [
"peft",
"safetensors",
"llava_llama",
"arxiv:1910.09700",
"base_model:liuhaotian/llava-v1.5-13b",
"base_model:adapter:liuhaotian/llava-v1.5-13b",
"region:us"
] | null | 2024-05-24T06:25:07Z | ---
library_name: peft
base_model: liuhaotian/llava-v1.5-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
jiahuimbzuai/llava-v1.5-7b-vlguard-10k3e-lora | jiahuimbzuai | 2024-05-24T06:23:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llava_llama",
"arxiv:1910.09700",
"base_model:liuhaotian/llava-v1.5-7b",
"base_model:adapter:liuhaotian/llava-v1.5-7b",
"region:us"
] | null | 2024-05-24T06:23:14Z | ---
library_name: peft
base_model: liuhaotian/llava-v1.5-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
Amankankriya/ppo-Walker2d-v4 | Amankankriya | 2024-05-24T06:22:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2d-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-24T06:22:32Z | ---
library_name: stable-baselines3
tags:
- Walker2d-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2d-v4
type: Walker2d-v4
metrics:
- type: mean_reward
value: 4275.64 +/- 20.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **Walker2d-v4**
This is a trained model of a **PPO** agent playing **Walker2d-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained agent.
checkpoint = load_from_hub(repo_id="Amankankriya/ppo-Walker2d-v4", filename="ppo-Walker2d-v4.zip")
model = PPO.load(checkpoint)
```
|
seongs/maymust_mistral_v1.2 | seongs | 2024-05-24T06:19:33Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T05:35:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
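In the absence of author-provided instructions, a minimal sketch; the `conversational` tag suggests the tokenizer ships a chat template, which is assumed below:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "seongs/maymust_mistral_v1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumption: a chat template is defined (the card is tagged "conversational").
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```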
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hgnoi/yVI8CyzLP2PQEdVp | hgnoi | 2024-05-24T06:19:20Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T06:17:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
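A minimal sketch via the `pipeline` API (the card's pipeline tag is text-generation; everything else is assumed):
```python
from transformers import pipeline

# Assumption: standard text-generation usage for this StableLM checkpoint.
generator = pipeline("text-generation", model="hgnoi/yVI8CyzLP2PQEdVp")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```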
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ghost613/gemma7_on_korean_conv | ghost613 | 2024-05-24T06:17:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:beomi/gemma-ko-7b",
"base_model:adapter:beomi/gemma-ko-7b",
"license:other",
"region:us"
] | null | 2024-05-24T06:17:16Z | ---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: beomi/gemma-ko-7b
model-index:
- name: gemma7_on_korean_conv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma7_on_korean_conv
This model is a fine-tuned version of [beomi/gemma-ko-7b](https://huggingface.co/beomi/gemma-ko-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7037
## Model description
More information needed
## Intended uses & limitations
More information needed
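A hedged loading sketch (assuming the repo stores a PEFT/LoRA adapter for the listed base model):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "beomi/gemma-ko-7b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Assumption: the repository contains adapter weights, per the peft tags.
model = PeftModel.from_pretrained(base, "ghost613/gemma7_on_korean_conv")
```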
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 7200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8472 | 0.1281 | 200 | 0.8849 |
| 0.7799 | 0.2563 | 400 | 0.8185 |
| 0.7791 | 0.3844 | 600 | 0.7945 |
| 0.7505 | 0.5126 | 800 | 0.7877 |
| 0.7891 | 0.6407 | 1000 | 0.7616 |
| 0.689 | 0.7688 | 1200 | 0.7437 |
| 0.7612 | 0.8970 | 1400 | 0.7520 |
| 0.5183 | 1.0251 | 1600 | 0.8028 |
| 0.4562 | 1.1533 | 1800 | 0.7811 |
| 0.4584 | 1.2814 | 2000 | 0.7920 |
| 0.4535 | 1.4095 | 2200 | 0.7887 |
| 0.4268 | 1.5377 | 2400 | 0.8048 |
| 0.4368 | 1.6658 | 2600 | 0.7640 |
| 0.4435 | 1.7940 | 2800 | 0.7844 |
| 0.4327 | 1.9221 | 3000 | 0.7977 |
| 0.1711 | 2.0502 | 3200 | 1.0313 |
| 0.1856 | 2.1784 | 3400 | 0.9997 |
| 0.1812 | 2.3065 | 3600 | 0.9870 |
| 0.1876 | 2.4346 | 3800 | 0.9731 |
| 0.1927 | 2.5628 | 4000 | 0.9857 |
| 0.1964 | 2.6909 | 4200 | 1.0148 |
| 0.1948 | 2.8191 | 4400 | 1.0025 |
| 0.1865 | 2.9472 | 4600 | 1.0556 |
| 0.059 | 3.0753 | 4800 | 1.3127 |
| 0.0523 | 3.2035 | 5000 | 1.3947 |
| 0.0658 | 3.3316 | 5200 | 1.3980 |
| 0.0596 | 3.4598 | 5400 | 1.3785 |
| 0.0556 | 3.5879 | 5600 | 1.3936 |
| 0.0709 | 3.7160 | 5800 | 1.3858 |
| 0.0544 | 3.8442 | 6000 | 1.3943 |
| 0.0503 | 3.9723 | 6200 | 1.4319 |
| 0.0133 | 4.1005 | 6400 | 1.6485 |
| 0.0144 | 4.2286 | 6600 | 1.6932 |
| 0.0126 | 4.3567 | 6800 | 1.6980 |
| 0.0189 | 4.4849 | 7000 | 1.6962 |
| 0.0128 | 4.6130 | 7200 | 1.7037 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
AlikS/LunarLander-v2 | AlikS | 2024-05-24T06:14:03Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-24T06:00:22Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -9.61 +/- 72.61
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'f': '/root/.local/share/jupyter/runtime/kernel-9e92468f-89d2-40b3-805a-092b95556d32.json',
 'exp_name': 'ppo-LunarLander',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 200000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'AlikS/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
sanqiang/zephyr-7b-gemma-dpo | sanqiang | 2024-05-24T06:08:14Z | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:argilla/dpo-mix-7k",
"base_model:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"base_model:finetune:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-21T06:34:11Z | ---
license: other
base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- argilla/dpo-mix-7k
model-index:
- name: zephyr-7b-gemma-dpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-gemma-dpo
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
yrdu/mogu-safer-vicuna-7B | yrdu | 2024-05-24T06:04:45Z | 0 | 0 | null | [
"safetensors",
"arxiv:2405.14488",
"license:apache-2.0",
"region:us"
] | null | 2024-05-23T10:24:21Z | ---
license: apache-2.0
---
#### Paper
MoGU: A Framework for Enhancing Safety of Open-Sourced LLMs While Preserving Their Usability [ https://arxiv.org/abs/2405.14488 ]
Github: [ https://github.com/DYR1/MoGU/ ]
#### Introduction
We open-source the safer vicuna-7b-v1.5 [ https://huggingface.co/lmsys/vicuna-7b-v1.5 ], trained with our proposed MoGU framework.
Technical details of MoGU can be found in the paper "MoGU: A Framework for Enhancing Safety of Open-Sourced LLMs While Preserving Their Usability".
We release the model parameters and the inference code.
The current inference code is a simple version that we will continue to improve.
In the future, we plan to open-source the training data and the training code.
You can use our open-sourced LLM by following the steps below.
#### Environment Configuration
```
pip install -r requirement.txt
```
#### Inference Stage
```
python inference.py
```
Some test examples are provided in inference.py.
|
yrdu/mogu-safer-llama2-7B | yrdu | 2024-05-24T06:04:15Z | 0 | 0 | null | [
"safetensors",
"arxiv:2405.14488",
"license:apache-2.0",
"region:us"
] | null | 2024-05-23T07:31:16Z | ---
license: apache-2.0
---
#### Paper
MoGU: A Framework for Enhancing Safety of Open-Sourced LLMs While Preserving Their Usability [ https://arxiv.org/abs/2405.14488 ]
Github: [ https://github.com/DYR1/MoGU/ ]
#### Introduction
We open-source the safer Llama-2-7b-chat-hf [ https://huggingface.co/meta-llama/Llama-2-7b-chat-hf ], trained with our proposed MoGU framework.
Technical details of MoGU can be found in the paper "MoGU: A Framework for Enhancing Safety of Open-Sourced LLMs While Preserving Their Usability".
We release the model parameters and the inference code.
The current inference code is a simple version that we will continue to improve.
In the future, we plan to open-source the training data and the training code.
You can use our open-sourced LLM by following the steps below.
#### Environment Configuration
```
pip install -r requirement.txt
```
#### Inference Stage
```
python inference.py
```
Some test examples are provided in inference.py.
|
sanjeev-bhandari01/phi-3-small-sft-lora | sanjeev-bhandari01 | 2024-05-24T06:02:25Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-24T05:42:37Z | ---
license: mit
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/Phi-3-mini-128k-instruct
datasets:
- generator
model-index:
- name: phi-3-small-sft-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-3-small-sft-lora
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6299 | 1.0 | 1 | 1.2966 |
| 0.6065 | 1.9692 | 2 | 1.2964 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
amztheory/code-python | amztheory | 2024-05-24T06:00:37Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2024-05-24T05:14:24Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: tiiuae/falcon-7b
model-index:
- name: code-python
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-python
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4825 | 0.3183 | 30 | 1.5136 |
| 1.4361 | 0.6366 | 60 | 1.4485 |
| 1.3649 | 0.9549 | 90 | 1.3628 |
| 1.3254 | 1.2732 | 120 | 1.2862 |
| 1.1969 | 1.5915 | 150 | 1.2227 |
| 1.2245 | 1.9098 | 180 | 1.1776 |
| 1.1604 | 2.2281 | 210 | 1.1626 |
| 1.1737 | 2.5464 | 240 | 1.1594 |
| 1.1012 | 2.8647 | 270 | 1.1593 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
hgnoi/XfvVyaU3xPbDT0Fx | hgnoi | 2024-05-24T05:55:31Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T05:53:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
binayakkoirala/outputs | binayakkoirala | 2024-05-24T05:52:11Z | 1 | 1 | peft | [
"peft",
"tensorboard",
"safetensors",
"gguf",
"mistral",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-23T10:26:11Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/mistral-7b-bnb-4bit
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 60
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf | RichardErkhov | 2024-05-24T05:48:15Z | 93 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T02:45:16Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
LuminRP-7B-128k-v0.2 - GGUF
- Model creator: https://huggingface.co/Ppoyaa/
- Original model: https://huggingface.co/Ppoyaa/LuminRP-7B-128k-v0.2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [LuminRP-7B-128k-v0.2.Q2_K.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q2_K.gguf) | Q2_K | 2.53GB |
| [LuminRP-7B-128k-v0.2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [LuminRP-7B-128k-v0.2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [LuminRP-7B-128k-v0.2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [LuminRP-7B-128k-v0.2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [LuminRP-7B-128k-v0.2.Q3_K.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q3_K.gguf) | Q3_K | 3.28GB |
| [LuminRP-7B-128k-v0.2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [LuminRP-7B-128k-v0.2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [LuminRP-7B-128k-v0.2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [LuminRP-7B-128k-v0.2.Q4_0.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q4_0.gguf) | Q4_0 | 3.83GB |
| [LuminRP-7B-128k-v0.2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [LuminRP-7B-128k-v0.2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [LuminRP-7B-128k-v0.2.Q4_K.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q4_K.gguf) | Q4_K | 4.07GB |
| [LuminRP-7B-128k-v0.2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [LuminRP-7B-128k-v0.2.Q4_1.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q4_1.gguf) | Q4_1 | 4.24GB |
| [LuminRP-7B-128k-v0.2.Q5_0.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q5_0.gguf) | Q5_0 | 4.65GB |
| [LuminRP-7B-128k-v0.2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [LuminRP-7B-128k-v0.2.Q5_K.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q5_K.gguf) | Q5_K | 4.78GB |
| [LuminRP-7B-128k-v0.2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [LuminRP-7B-128k-v0.2.Q5_1.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q5_1.gguf) | Q5_1 | 5.07GB |
| [LuminRP-7B-128k-v0.2.Q6_K.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q6_K.gguf) | Q6_K | 5.53GB |
| [LuminRP-7B-128k-v0.2.Q8_0.gguf](https://huggingface.co/RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf/blob/main/LuminRP-7B-128k-v0.2.Q8_0.gguf) | Q8_0 | 7.17GB |
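A minimal sketch for fetching and running one of these files locally, assuming `huggingface_hub` and `llama-cpp-python` are installed (the quant choice is illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (Q4_K_M is a common quality/size trade-off)
path = hf_hub_download(
    repo_id="RichardErkhov/Ppoyaa_-_LuminRP-7B-128k-v0.2-gguf",
    filename="LuminRP-7B-128k-v0.2.Q4_K_M.gguf",
)

# Load with the llama.cpp bindings; n_ctx can be raised toward the model's 128k window if RAM allows
llm = Llama(model_path=path, n_ctx=8192)
print(llm("Describe your persona in one sentence.", max_tokens=64)["choices"][0]["text"])
```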
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
license: apache-2.0
---
# LuminRP-7B-128k-v0.2
LuminRP-7B-128k-v0.2 is a merge of four RP models into one using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing).
This model is purely for roleplaying and uses a 128k context window.
# Example Response:
I use the **ChatML** template for this with **Instruct Mode enabled**. The **Mistral** template is okay to use as well, but I don't recommend **Alpaca-Roleplay** because it just keeps going, most likely because the **Alpaca-Roleplay** template doesn't have a message suffix.

# Quantized Version
**GGUF**: [Ppoyaa/LuminRP-7B-128k-v0.2-GGUF](https://huggingface.co/Ppoyaa/LuminRP-7B-128k-v0.2-GGUF)
# 🏆 Open LLM Leaderboard Evaluation Results
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.18|
|AI2 Reasoning Challenge (25-Shot)|70.56|
|HellaSwag (10-Shot) |87.46|
|MMLU (5-Shot) |64.92|
|TruthfulQA (0-shot) |65.78|
|Winogrande (5-shot) |82.40|
|GSM8k (5-shot) |67.93|
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "Ppoyaa/LuminRP-7B-128k-v0.2"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt from the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Run generation through a text-generation pipeline in fp16
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
damgomz/ft_8_2e6_mlm_cv | damgomz | 2024-05-24T05:40:04Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-23T10:50:47Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-24T07:40:00'
project_name: ft_8_2e6_mlm_cv_emissions_tracker
run_id: cea1c3a0-30d8-4d7f-8455-62cb78a3d198
duration: 70137.06294441223
emissions: 0.0424410221797307
emissions_rate: 6.051154752996677e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 3.75
cpu_energy: 0.8280053444213326
gpu_energy: 0
ram_energy: 0.0730587981601555
energy_consumed: 0.9010641425814891
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 10
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 70137.06294441223 |
| Emissions (Co2eq in kg) | 0.0424410221797307 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.8280053444213326 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0730587981601555 |
| Consumed energy (kWh) | 0.9010641425814891 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.13501384616799353 |
| Emissions (Co2eq in kg) | 0.027470349653228122 |
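These figures come from CodeCarbon (see `codecarbon_version` in the metadata above); a minimal sketch of how such tracking is typically wrapped around a training run, with the training loop stubbed out as a placeholder:

```python
from codecarbon import EmissionsTracker

def train():
    """Placeholder for the actual fine-tuning loop."""
    pass

tracker = EmissionsTracker(project_name="ft_8_2e6_mlm_cv_emissions_tracker")
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # total CO2eq in kg for the tracked span

print(f"Emissions: {emissions_kg:.6f} kg CO2eq")
```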
## Note
21 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs16_lr5_MLM |
| model_name | ft_8_2e6_mlm_cv |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 2e-06 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32586 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.471943 | 0.379341 | 0.827490 | 0.862092 |
| 1 | 0.341136 | 0.345278 | 0.845166 | 0.871145 |
| 2 | 0.304972 | 0.345030 | 0.845167 | 0.880203 |
| 3 | 0.269654 | 0.349133 | 0.847524 | 0.862721 |
| 4 | 0.229831 | 0.367654 | 0.841041 | 0.847553 |
| 5 | 0.180061 | 0.407419 | 0.834705 | 0.867764 |
|
Hyeyoon/OPEN-SOLAR-KO-10.7B-NLP | Hyeyoon | 2024-05-24T05:34:56Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-24T05:31:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/WizardLM-2-7B-abliterated-GGUF | mradermacher | 2024-05-24T05:34:35Z | 81 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:fearlessdots/WizardLM-2-7B-abliterated",
"base_model:quantized:fearlessdots/WizardLM-2-7B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T04:09:57Z | ---
base_model: fearlessdots/WizardLM-2-7B-abliterated
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/fearlessdots/WizardLM-2-7B-abliterated
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
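For single-file quants like those below, a minimal loading sketch with `llama-cpp-python`'s OpenAI-style chat API (file name and settings are illustrative, not a recommendation from the quantizer):

```python
from llama_cpp import Llama

llm = Llama(model_path="WizardLM-2-7B-abliterated.Q4_K_M.gguf", n_ctx=4096)

# llama-cpp-python exposes an OpenAI-style chat API that applies the model's chat template
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what an abliterated model is."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```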
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-2-7B-abliterated-GGUF/resolve/main/WizardLM-2-7B-abliterated.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
damgomz/ft_8_4e6_mlm_cv | damgomz | 2024-05-24T05:32:13Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-23T10:48:35Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-24T07:32:11'
project_name: ft_8_4e6_mlm_cv_emissions_tracker
run_id: 540c605b-00bf-4a6b-9831-e338e3f3ffd0
duration: 69752.5319519043
emissions: 0.0422083433539418
emissions_rate: 6.051155731958987e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 3.75
cpu_energy: 0.8234658257799006
gpu_energy: 0
ram_energy: 0.072658319226404
energy_consumed: 0.8961241450063017
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 10
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 69752.5319519043 |
| Emissions (Co2eq in kg) | 0.0422083433539418 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.8234658257799006 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.072658319226404 |
| Consumed energy (kWh) | 0.8961241450063017 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.13427362400741574 |
| Emissions (Co2eq in kg) | 0.027319741681162513 |
## Note
21 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs16_lr5_MLM |
| model_name | ft_8_4e6_mlm_cv |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 4e-06 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32586 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.420520 | 0.346607 | 0.847082 | 0.877867 |
| 1 | 0.318169 | 0.338375 | 0.845903 | 0.855383 |
| 2 | 0.266432 | 0.351313 | 0.844136 | 0.855602 |
| 3 | 0.199349 | 0.386016 | 0.838390 | 0.867479 |
| 4 | 0.111738 | 0.455603 | 0.836033 | 0.839846 |
| 5 | 0.045670 | 0.558804 | 0.831907 | 0.844940 |
|
Stern5497/org_modelorg_model | Stern5497 | 2024-05-24T05:28:51Z | 1 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-10T20:29:53Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: org_modelorg_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# org_modelorg_model
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0305
- F1 Micro: 0.7988
- F1 Macro: 0.7745
- F1 Weighted: 0.8091
## Model description
More information needed
## Intended uses & limitations
More information needed
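As a starting point, a minimal inference sketch that attaches this repo's LoRA adapter to the base model (standard PEFT usage; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach this repo's LoRA adapter on top of the frozen base weights
model = PeftModel.from_pretrained(base, "Stern5497/org_modelorg_model")

inputs = tokenizer("Classify the following text:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```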
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:-----------:|
| 1.7847 | 0.0064 | 25 | 1.4983 | 0.7827 | 0.7547 | 0.7929 |
| 1.3333 | 0.0127 | 50 | 1.2986 | 0.7926 | 0.7660 | 0.8031 |
| 1.2721 | 0.0191 | 75 | 1.2255 | 0.7755 | 0.7520 | 0.7862 |
| 1.127 | 0.0255 | 100 | 1.1722 | 0.7945 | 0.7694 | 0.8053 |
| 1.1108 | 0.0318 | 125 | 1.1561 | 0.7922 | 0.7556 | 0.7971 |
| 1.0969 | 0.0382 | 150 | 1.1181 | 0.7875 | 0.7581 | 0.7955 |
| 1.0714 | 0.0446 | 175 | 1.1001 | 0.7884 | 0.7658 | 0.7993 |
| 1.0219 | 0.0510 | 200 | 1.0758 | 0.8000 | 0.7727 | 0.8091 |
| 1.0979 | 0.0573 | 225 | 1.0671 | 0.7973 | 0.7656 | 0.8040 |
| 1.0846 | 0.0637 | 250 | 1.0632 | 0.7866 | 0.7582 | 0.7944 |
| 0.9977 | 0.0701 | 275 | 1.0590 | 0.7934 | 0.7600 | 0.7991 |
| 1.1262 | 0.0764 | 300 | 1.0404 | 0.7984 | 0.7699 | 0.8066 |
| 1.0066 | 0.0828 | 325 | 1.0396 | 0.7981 | 0.7681 | 0.8053 |
| 1.0534 | 0.0892 | 350 | 1.0360 | 0.8005 | 0.7768 | 0.8113 |
| 1.0302 | 0.0955 | 375 | 1.0320 | 0.7993 | 0.7754 | 0.8099 |
| 1.0965 | 0.1019 | 400 | 1.0305 | 0.7988 | 0.7745 | 0.8091 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
Zoyd/CohereForAI_aya-23-8B-8_0bpw_exl2 | Zoyd | 2024-05-24T05:10:26Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-24T05:03:00Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-2_2bpw_exl2)**</center> | <center>4898 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-2_5bpw_exl2)**</center> | <center>5185 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_0bpw_exl2)**</center> | <center>5664 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_5bpw_exl2)**</center> | <center>6142 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_75bpw_exl2)**</center> | <center>6382 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-4_0bpw_exl2)**</center> | <center>6620 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-4_25bpw_exl2)**</center> | <center>6860 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-5_0bpw_exl2)**</center> | <center>7576 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-6_0bpw_exl2)**</center> | <center>8742 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-6_5bpw_exl2)**</center> | <center>9212 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-8_0bpw_exl2)**</center> | <center>9691 MB</center> | <center>8</center> |
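These EXL2 files load with the ExLlamaV2 runtime rather than plain transformers; below is a minimal sketch against the v0.0.21-era ExLlamaV2 Python API (treat the exact names as assumptions, and `model_dir` as a placeholder for a local download of one of the quants above):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "aya-23-8B-5_0bpw_exl2"  # local path to a downloaded quant
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.3

# Prompt follows the chat template shown in the usage section below
prompt = "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello!<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
print(generator.generate_simple(prompt, settings, 100))
```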
# Model Card for Aya-23-8B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat) here. You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
``` |
tjasad/translation_slo_eng_opus-mt-sla-en_lora | tjasad | 2024-05-24T05:09:13Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-sla-en",
"base_model:adapter:Helsinki-NLP/opus-mt-sla-en",
"license:apache-2.0",
"region:us"
] | null | 2024-05-21T16:55:30Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: Helsinki-NLP/opus-mt-sla-en
metrics:
- bleu
model-index:
- name: translation_slo_eng_opus-mt-sla-en_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation_slo_eng_opus-mt-sla-en_lora
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-sla-en](https://huggingface.co/Helsinki-NLP/opus-mt-sla-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7472
- Bleu: 34.2191
- Gen Len: 12.4026
## Model description
More information needed
## Intended uses & limitations
More information needed
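For inference, a minimal sketch that loads the adapter onto the base Marian model and translates a Slovene sentence (standard PEFT usage; the example sentence means "Today is a beautiful day."):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_id = "Helsinki-NLP/opus-mt-sla-en"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)

# Attach the fine-tuned LoRA weights from this repo
model = PeftModel.from_pretrained(base, "tjasad/translation_slo_eng_opus-mt-sla-en_lora")

batch = tokenizer(["Danes je lep dan."], return_tensors="pt")
out = model.generate(**batch, max_new_tokens=40)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```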
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.8995 | 1.0 | 2500 | 1.7472 | 34.2191 | 12.4026 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
hgnoi/pVr3Z630rkj39xic | hgnoi | 2024-05-24T05:08:32Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T05:06:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zoyd/CohereForAI_aya-23-8B-6_0bpw_exl2 | Zoyd | 2024-05-24T05:03:50Z | 8 | 2 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-24T04:44:14Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-2_2bpw_exl2)**</center> | <center>4898 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-2_5bpw_exl2)**</center> | <center>5185 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_0bpw_exl2)**</center> | <center>5664 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_5bpw_exl2)**</center> | <center>6142 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_75bpw_exl2)**</center> | <center>6382 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-4_0bpw_exl2)**</center> | <center>6620 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-4_25bpw_exl2)**</center> | <center>6860 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-5_0bpw_exl2)**</center> | <center>7576 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-6_0bpw_exl2)**</center> | <center>8742 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-6_5bpw_exl2)**</center> | <center>9212 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-8_0bpw_exl2)**</center> | <center>9691 MB</center> | <center>8</center> |
# Model Card for Aya-23-8B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat) here. You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
``` |
Zoyd/CohereForAI_aya-23-8B-4_0bpw_exl2 | Zoyd | 2024-05-24T05:03:43Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-24T04:16:18Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-2_2bpw_exl2)**</center> | <center>4898 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-2_5bpw_exl2)**</center> | <center>5185 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_0bpw_exl2)**</center> | <center>5664 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_5bpw_exl2)**</center> | <center>6142 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_75bpw_exl2)**</center> | <center>6382 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-4_0bpw_exl2)**</center> | <center>6620 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-4_25bpw_exl2)**</center> | <center>6860 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-5_0bpw_exl2)**</center> | <center>7576 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-6_0bpw_exl2)**</center> | <center>8742 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-6_5bpw_exl2)**</center> | <center>9212 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-8_0bpw_exl2)**</center> | <center>9691 MB</center> | <center>8</center> |
# Model Card for Aya-23-8B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat) here. You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
``` |
Zoyd/CohereForAI_aya-23-8B-3_75bpw_exl2 | Zoyd | 2024-05-24T05:03:40Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-24T04:06:53Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-2_2bpw_exl2)**</center> | <center>4898 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-2_5bpw_exl2)**</center> | <center>5185 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_0bpw_exl2)**</center> | <center>5664 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_5bpw_exl2)**</center> | <center>6142 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_75bpw_exl2)**</center> | <center>6382 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-4_0bpw_exl2)**</center> | <center>6620 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-4_25bpw_exl2)**</center> | <center>6860 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-5_0bpw_exl2)**</center> | <center>7576 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-6_0bpw_exl2)**</center> | <center>8742 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-6_5bpw_exl2)**</center> | <center>9212 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-8_0bpw_exl2)**</center> | <center>9691 MB</center> | <center>8</center> |
# Model Card for Aya-23-8B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
- Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license); also requires adherence to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install a `transformers` version that includes the necessary changes for this model (4.41.1 or later):
```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
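As a rough sketch of the 4-bit loading step that QLoRA fine-tuning typically starts from (the quantization settings below are common defaults, not values taken from the notebook):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization config; assumed defaults, not the notebook's exact settings
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
```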
## Model Details
**Input**: The model takes text as input only.
**Output**: The model generates text only.
**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
``` |
Zoyd/CohereForAI_aya-23-8B-2_5bpw_exl2 | Zoyd | 2024-05-24T05:03:29Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-24T03:39:03Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-2_2bpw_exl2)**</center> | <center>4898 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-2_5bpw_exl2)**</center> | <center>5185 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_0bpw_exl2)**</center> | <center>5664 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_5bpw_exl2)**</center> | <center>6142 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-3_75bpw_exl2)**</center> | <center>6382 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-4_0bpw_exl2)**</center> | <center>6620 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-4_25bpw_exl2)**</center> | <center>6860 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-5_0bpw_exl2)**</center> | <center>7576 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-6_0bpw_exl2)**</center> | <center>8742 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-6_5bpw_exl2)**</center> | <center>9212 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-8B-8_0bpw_exl2)**</center> | <center>9691 MB</center> | <center>8</center> |
# Model Card for Aya-23-8B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
- Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license); also requires adherence to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install a `transformers` version that includes the necessary changes for this model (4.41.1 or later):
```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: The model takes text as input only.
**Output**: The model generates text only.
**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
``` |
mradermacher/MuMath-Code-CL-34B-GGUF | mradermacher | 2024-05-24T05:00:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:weihao1/MuMath-Code-CL-34B",
"base_model:quantized:weihao1/MuMath-Code-CL-34B",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T00:47:11Z | ---
base_model: weihao1/MuMath-Code-CL-34B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/weihao1/MuMath-Code-CL-34B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
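As a quick sketch (the quant file name is taken from the table below; the prompt is arbitrary), a single quant can be fetched with `huggingface-cli` and run with a May-2024 build of llama.cpp:

```sh
# Download one quant file from this repository
huggingface-cli download mradermacher/MuMath-Code-CL-34B-GGUF \
  MuMath-Code-CL-34B.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's `main` binary
./main -m MuMath-Code-CL-34B.Q4_K_M.gguf -p "Solve: 12 * 7 =" -n 128
```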
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.Q2_K.gguf) | Q2_K | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.IQ3_XS.gguf) | IQ3_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.Q3_K_S.gguf) | Q3_K_S | 14.7 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.IQ3_S.gguf) | IQ3_S | 14.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.IQ3_M.gguf) | IQ3_M | 15.3 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.Q3_K_M.gguf) | Q3_K_M | 16.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.Q3_K_L.gguf) | Q3_K_L | 17.9 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.IQ4_XS.gguf) | IQ4_XS | 18.3 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.Q4_K_S.gguf) | Q4_K_S | 19.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.Q4_K_M.gguf) | Q4_K_M | 20.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.Q5_K_S.gguf) | Q5_K_S | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.Q5_K_M.gguf) | Q5_K_M | 23.9 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.Q6_K.gguf) | Q6_K | 27.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-34B-GGUF/resolve/main/MuMath-Code-CL-34B.Q8_0.gguf) | Q8_0 | 36.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
xaviviro/Lorca-LLama3-8B-GGUF | xaviviro | 2024-05-24T05:00:37Z | 7 | 0 | null | [
"gguf",
"lorca",
"poemas",
"canciones",
"es",
"en",
"dataset:xaviviro/FEDERICO-GARCIA-LORCA-canciones-poemas-romances",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-24T04:36:03Z | ---
license: apache-2.0
datasets:
- xaviviro/FEDERICO-GARCIA-LORCA-canciones-poemas-romances
language:
- es
- en
tags:
- lorca
- poemas
- canciones
---
# Lorca Llama 3
## LM Studio preset:
```
Llama 3
```
## System prompt:
```
You are an advanced language model trained to write poems in the style of Federico García Lorca in Spanish. When given some themes, generate an original, well-structured poem in Spanish that reflects Lorca's unique style.
```
## Prompt:
```
Escribe un poema basado en estos temas en el estilo de Federico García Lorca:
[TEMAS]
```
## Example prompt:
```
Escribe un poema basado en estos temas en el estilo de Federico García Lorca:
Barcelona, gentrificación, ciudad vendida, turismo de borrachera
```

|
tdnathmlenthusiast/mistral_instruct_generation | tdnathmlenthusiast | 2024-05-24T04:58:04Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-23T20:09:53Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
datasets:
- generator
model-index:
- name: mistral_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_generation
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2713 | 1.8182 | 20 | 0.1395 |
| 0.0972 | 3.6364 | 40 | 0.0639 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
rawsem/buffettgpt-ft | rawsem | 2024-05-24T04:56:33Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-24T04:56:32Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: buffettgpt-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# buffettgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6124
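A minimal loading sketch for this adapter (assumes a GPTQ-capable environment with `optimum` and `auto-gptq` installed; this is not part of the original card):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the GPTQ base model declared in the adapter config, then applies this adapter
model = AutoPeftModelForCausalLM.from_pretrained("rawsem/buffettgpt-ft", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ")
```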
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.031 | 0.992 | 31 | 0.8610 |
| 0.8683 | 1.984 | 62 | 0.6911 |
| 0.7677 | 2.976 | 93 | 0.5970 |
| 0.7038 | 4.0 | 125 | 0.5716 |
| 0.6997 | 4.992 | 156 | 0.5594 |
| 0.6701 | 5.984 | 187 | 0.5642 |
| 0.6402 | 6.976 | 218 | 0.5723 |
| 0.5945 | 8.0 | 250 | 0.5855 |
| 0.5881 | 8.992 | 281 | 0.6030 |
| 0.5596 | 9.92 | 310 | 0.6124 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.1.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
damgomz/ft_16_4e6_mlm_cv | damgomz | 2024-05-24T04:54:07Z | 99 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-23T10:46:37Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-24T06:54:05'
project_name: ft_16_4e6_mlm_cv_emissions_tracker
run_id: 3f56a463-50e7-4188-bb92-9e3115763b7d
duration: 67498.73499298096
emissions: 0.0408445349436233
emissions_rate: 6.051155617637977e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 3.75
cpu_energy: 0.7968585600329781
gpu_energy: 0
ram_energy: 0.070310607152929
energy_consumed: 0.8671691671859065
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 10
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 67498.73499298096 |
| Emissions (Co2eq in kg) | 0.0408445349436233 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7968585600329781 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.070310607152929 |
| Consumed energy (kWh) | 0.8671691671859065 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.12993506486148834 |
| Emissions (Co2eq in kg) | 0.02643700453891754 |
## Note
21 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs16_lr5_MLM |
| model_name | ft_16_4e6_mlm_cv |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 4e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32586 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.453056 | 0.355866 | 0.842959 | 0.859240 |
| 1 | 0.330308 | 0.346280 | 0.843838 | 0.853032 |
| 2 | 0.289168 | 0.346006 | 0.844134 | 0.862091 |
| 3 | 0.240538 | 0.363388 | 0.843545 | 0.864176 |
| 4 | 0.178344 | 0.397027 | 0.838242 | 0.856995 |
| 5 | 0.105971 | 0.492622 | 0.827195 | 0.818774 |
|
OwOpeepeepoopoo/LittleJerry11 | OwOpeepeepoopoo | 2024-05-24T04:53:50Z | 85 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T04:52:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hgyukang/test-8b-2 | hgyukang | 2024-05-24T04:50:59Z | 70 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-24T04:47:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hgnoi/dJE9JNlxEtPNDZTo | hgnoi | 2024-05-24T04:45:16Z | 80 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T04:41:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jellywibble/ChateauLafite8BLORA | Jellywibble | 2024-05-24T04:40:23Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T15:40:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tjasad/translation_slo_eng_opus-mt-sla-en_prompt_tuning | tjasad | 2024-05-24T04:32:16Z | 33 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-sla-en",
"base_model:adapter:Helsinki-NLP/opus-mt-sla-en",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T06:40:28Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: Helsinki-NLP/opus-mt-sla-en
metrics:
- bleu
model-index:
- name: translation_slo_eng_opus-mt-sla-en_prompt_tuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation_slo_eng_opus-mt-sla-en_prompt_tuning
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-sla-en](https://huggingface.co/Helsinki-NLP/opus-mt-sla-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1830
- Bleu: 0.3807
- Gen Len: 13.9075
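A minimal sketch for loading this prompt-tuning adapter on top of its base model (standard PEFT usage; generation support for prompt tuning depends on the PEFT version):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "Helsinki-NLP/opus-mt-sla-en"
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "tjasad/translation_slo_eng_opus-mt-sla-en_prompt_tuning")

inputs = tokenizer("Danes je lep dan.", return_tensors="pt")  # "Today is a beautiful day."
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```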
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 5.4024 | 1.0 | 2500 | 5.1830 | 0.3807 | 13.9075 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
raulgdp/Mistral-clasificacion-multilabel | raulgdp | 2024-05-24T04:29:09Z | 123 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:NazaGara/NER-fine-tuned-BETO",
"base_model:finetune:NazaGara/NER-fine-tuned-BETO",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-09T17:41:49Z | ---
license: cc-by-4.0
base_model: NazaGara/NER-fine-tuned-BETO
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER-finetuning-BETO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-finetuning-BETO
This is a fine-tuned version of the BETO NER model [NazaGara/NER-fine-tuned-BETO](https://huggingface.co/NazaGara/NER-fine-tuned-BETO) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2140
- Precision: 0.8424
- Recall: 0.8545
- F1: 0.8484
- Accuracy: 0.9691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0482 | 1.0 | 1041 | 0.1522 | 0.8309 | 0.8481 | 0.8394 | 0.9687 |
| 0.0302 | 2.0 | 2082 | 0.1661 | 0.8293 | 0.8527 | 0.8408 | 0.9696 |
| 0.0164 | 3.0 | 3123 | 0.1691 | 0.8403 | 0.8536 | 0.8469 | 0.9696 |
| 0.011 | 4.0 | 4164 | 0.2026 | 0.8427 | 0.8516 | 0.8471 | 0.9693 |
| 0.0073 | 5.0 | 5205 | 0.2140 | 0.8424 | 0.8545 | 0.8484 | 0.9691 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
trungtienluong/23cau10epochvacacthongsokhac | trungtienluong | 2024-05-24T04:28:38Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:vilm/vinallama-7b-chat",
"base_model:adapter:vilm/vinallama-7b-chat",
"license:llama2",
"region:us"
] | null | 2024-05-24T04:28:30Z | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: vilm/vinallama-7b-chat
model-index:
- name: experiments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# experiments
This model is a fine-tuned version of [vilm/vinallama-7b-chat](https://huggingface.co/vilm/vinallama-7b-chat) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.36.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2 |
RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf | RichardErkhov | 2024-05-24T04:27:59Z | 7 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-24T01:05:24Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DPOpenHermes-11B - GGUF
- Model creator: https://huggingface.co/openaccess-ai-collective/
- Original model: https://huggingface.co/openaccess-ai-collective/DPOpenHermes-11B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DPOpenHermes-11B.Q2_K.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q2_K.gguf) | Q2_K | 3.73GB |
| [DPOpenHermes-11B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [DPOpenHermes-11B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [DPOpenHermes-11B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [DPOpenHermes-11B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [DPOpenHermes-11B.Q3_K.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q3_K.gguf) | Q3_K | 4.84GB |
| [DPOpenHermes-11B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [DPOpenHermes-11B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [DPOpenHermes-11B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [DPOpenHermes-11B.Q4_0.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q4_0.gguf) | Q4_0 | 5.66GB |
| [DPOpenHermes-11B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [DPOpenHermes-11B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [DPOpenHermes-11B.Q4_K.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q4_K.gguf) | Q4_K | 6.02GB |
| [DPOpenHermes-11B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [DPOpenHermes-11B.Q4_1.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q4_1.gguf) | Q4_1 | 6.27GB |
| [DPOpenHermes-11B.Q5_0.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q5_0.gguf) | Q5_0 | 6.89GB |
| [DPOpenHermes-11B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [DPOpenHermes-11B.Q5_K.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q5_K.gguf) | Q5_K | 7.08GB |
| [DPOpenHermes-11B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [DPOpenHermes-11B.Q5_1.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q5_1.gguf) | Q5_1 | 7.51GB |
| [DPOpenHermes-11B.Q6_K.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q6_K.gguf) | Q6_K | 8.2GB |
| [DPOpenHermes-11B.Q8_0.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-11B-gguf/blob/main/DPOpenHermes-11B.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
license: apache-2.0
datasets:
- teknium/openhermes
- argilla/ultrafeedback-binarized-preferences
- Intel/orca_dpo_pairs
language:
- en
library_name: transformers
---
# DPOpenHermes 11B
This is a mergekit passthrough merge that builds an 11B model from two separate revisions of DPOpenHermes-7B, interleaving their layers as follows:
```
slices:
- sources:
- model: openaccess-ai-collective/DPOpenHermes-7B
revision: dpo-v0
layer_range: [0, 24]
- sources:
- model: openaccess-ai-collective/DPOpenHermes-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
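A sketch of how a merge like this is produced with mergekit, assuming the YAML above is saved as `config.yml` (the output directory name is arbitrary):

```sh
pip install mergekit
mergekit-yaml config.yml ./DPOpenHermes-11B --copy-tokenizer
```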
|
arjunshajitech/whisper-small-malayalam-v3 | arjunshajitech | 2024-05-24T04:26:12Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ml",
"dataset:mozilla-foundation/common_voice_14_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-23T16:53:08Z | ---
language:
- ml
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_14_0
metrics:
- wer
model-index:
- name: Whisper Small Malayalam - Arjun Shaji
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
      name: Common Voice 14.0
type: mozilla-foundation/common_voice_14_0
config: ml
split: None
args: 'config: ml, split: test'
metrics:
- name: Wer
type: wer
value: 72.38930659983292
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Malayalam - Arjun Shaji
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 14.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4878
- Wer: 72.3893
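A minimal transcription sketch with the transformers pipeline (the audio path is hypothetical):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and transcribe a Malayalam audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="arjunshajitech/whisper-small-malayalam-v3",
)
print(asr("sample_malayalam.wav")["text"])
```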
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0081 | 9.6154 | 1000 | 0.3642 | 73.2665 |
| 0.0019 | 19.2308 | 2000 | 0.4225 | 71.8045 |
| 0.0002 | 28.8462 | 3000 | 0.4564 | 73.3918 |
| 0.0 | 38.4615 | 4000 | 0.4809 | 72.5564 |
| 0.0 | 48.0769 | 5000 | 0.4878 | 72.3893 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
johannp/ppo-LunarLander-v1 | johannp | 2024-05-24T04:24:08Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-24T04:23:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.57 +/- 16.32
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename is an assumption; adjust it to the file actually stored in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub and load the agent.
checkpoint = load_from_hub("johannp/ppo-LunarLander-v1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Ponleur/vivit-MSASL-dataset | Ponleur | 2024-05-24T04:19:53Z | 62 | 0 | transformers | [
"transformers",
"safetensors",
"vivit",
"video-classification",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-05-24T04:06:36Z | ---
license: apache-2.0
metrics:
- accuracy
pipeline_tag: video-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Reynaerde-7B-v2-GGUF | mradermacher | 2024-05-24T04:12:34Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:vandeju/ultrachat_combined_sample_nl",
"base_model:vandeju/Reynaerde-7B-v2",
"base_model:quantized:vandeju/Reynaerde-7B-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-24T02:27:20Z | ---
base_model: vandeju/Reynaerde-7B-v2
datasets:
- vandeju/ultrachat_combined_sample_nl
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/vandeju/Reynaerde-7B-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
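For example, a downloaded quant can be run with the llama-cpp-python bindings; a minimal sketch (the local filename is an assumption, substitute whichever quant you pick from the table below):
```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file; tune n_ctx to your hardware.
llm = Llama(model_path="Reynaerde-7B-v2.Q4_K_M.gguf", n_ctx=4096)
out = llm("Schrijf een korte zin over Reynaert de vos.", max_tokens=64)
print(out["choices"][0]["text"])
```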
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Reynaerde-7B-v2-GGUF/resolve/main/Reynaerde-7B-v2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
janhq/llama3 | janhq | 2024-05-24T04:06:20Z | 8 | 0 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2024-05-15T09:35:53Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity\u2019s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and\
\ Documentation (and any portion thereof) made available under this Agreement.\n\
\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or,\
\ if you are an entity, your principal place of business is in the EEA or Switzerland)\
\ and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta\u2019s intellectual property or other rights owned by Meta embodied\
\ in the Llama Materials to use, reproduce, distribute, copy, create derivative\
\ works of, and make modifications to the Llama Materials.\nb. Redistribution and\
\ Use.\ni. If you distribute or make available the Llama Materials (or any derivative\
\ works thereof), or a product or service that uses any of them, including another\
\ AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display \u201CBuilt with Meta Llama 3\u201D on a related website,\
\ user interface, blogpost, about page, or product documentation. If you use the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include \u201CLlama 3\u201D\
\ at the beginning of any such AI model name.\nii. If you receive Llama Materials,\
\ or any derivative works thereof, from a Licensee as part of an integrated end\
\ user product, then Section 2 of this Agreement will not apply to you.\niii. You\
\ must retain in all copies of the Llama Materials that you distribute the following\
\ attribution notice within a \u201CNotice\u201D text file distributed as a part\
\ of such copies: \u201CMeta Llama 3 is licensed under the Meta Llama 3 Community\
\ License, Copyright \xA9 Meta Platforms, Inc. All Rights Reserved.\u201D\niv. Your\
\ use of the Llama Materials must comply with applicable laws and regulations (including\
\ trade compliance laws and regulations) and adhere to the Acceptable Use Policy\
\ for the Llama Materials (available at https://llama.meta.com/llama3/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\nv. You will not\
\ use the Llama Materials or any output or results of the Llama Materials to improve\
\ any other large language model (excluding Meta Llama 3 or derivative works thereof).\n\
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the\
\ monthly active users of the products or services made available by or for Licensee,\
\ or Licensee\u2019s affiliates, is greater than 700 million monthly active users\
\ in the preceding calendar month, you must request a license from Meta, which Meta\
\ may grant to you in its sole discretion, and you are not authorized to exercise\
\ any of the rights under this Agreement unless or until Meta otherwise expressly\
\ grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE\
\ LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON\
\ AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required for\
\ reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ \u201CLlama 3\u201D (the \u201CMark\u201D) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines\
\ (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives\
\ made by or for Meta, with respect to any derivative works and modifications of\
\ the Llama Materials that are made by you, as between you and Meta, you are and\
\ will be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama\
\ 3 outputs or results, or any portion of any of the foregoing, constitutes infringement\
\ of intellectual property or other rights owned or licensable by you, then any\
\ licenses granted to you under this Agreement shall terminate as of the date such\
\ litigation or claim is filed or instituted. You will indemnify and hold harmless\
\ Meta from and against any claim by any third party arising out of or related to\
\ your use or distribution of the Llama Materials.\n6. Term and Termination. The\
\ term of this Agreement will commence upon your acceptance of this Agreement or\
\ access to the Llama Materials and will continue in full force and effect until\
\ terminated in accordance with the terms and conditions herein. Meta may terminate\
\ this Agreement if you are in breach of any term or condition of this Agreement.\
\ Upon termination of this Agreement, you shall delete and cease use of the Llama\
\ Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n\
7. Governing Law and Jurisdiction. This Agreement will be governed and construed\
\ under the laws of the State of California without regard to choice of law principles,\
\ and the UN Convention on Contracts for the International Sale of Goods does not\
\ apply to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (\u201CPolicy\u201D). The most recent copy of this policy can be found\
\ at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software \u201Cbug,\u201D or other problems that could lead to\
\ a violation of this Policy through one of the following means:\n * Reporting\
\ issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Julien! How are you?
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and
truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please,
respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_creator: meta-llama
model_name: Meta-Llama-3-8B-Instruct
quantized_by: JanHQ
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a GGUF version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- Model description: [Readme](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/README.md)
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Converter
This is a repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute to and strengthen this repository. We aim to expand it so that it can convert models into various formats.
|
DidNothing/Smaug-Llama-3-70B-Instruct-abliterated-v3-exl2-bpw6.0 | DidNothing | 2024-05-24T03:56:54Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-23T22:21:49Z | 6bpw exl2 quant of https://huggingface.co/failspy/Smaug-Llama-3-70B-Instruct-abliterated-v3. |
hgnoi/ul1MYxU5iaiw3L31 | hgnoi | 2024-05-24T03:54:06Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T03:50:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GENIAC-Team-Ozaki/full-sft-finetuned-stage4-iter86000-v3 | GENIAC-Team-Ozaki | 2024-05-24T03:53:48Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T03:48:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantFactory/Yi-1.5-6B-GGUF | QuantFactory | 2024-05-24T03:48:05Z | 115 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"base_model:01-ai/Yi-1.5-6B",
"base_model:quantized:01-ai/Yi-1.5-6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T03:05:54Z | ---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
base_model: 01-ai/Yi-1.5-6B
---
# Yi-1.5-6B-GGUF
- This is a quantized version of [01-ai/Yi-1.5-6B](https://huggingface.co/01-ai/Yi-1.5-6B) created using llama.cpp
# Model Description
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or outperforms larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or outperforms larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with the Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
yuyijiong/swin-v2-base-remote-sensing-quality | yuyijiong | 2024-05-24T03:46:52Z | 150 | 0 | transformers | [
"transformers",
"pytorch",
"swinv2",
"image-classification",
"zh",
"arxiv:2307.11965",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-04-20T07:01:16Z | ---
license: cc-by-nc-4.0
language:
- zh
---
[github](https://github.com/yuyijiong/remote_sense_image_quality_inspection)
[paper](https://arxiv.org/abs/2307.11965)
This model uses Swin V2 to detect whether a remote-sensing image contains any of the following 9 types of quality problems:
"0": "cloud",
"1": "shadow",
"2": "smearing",
"3": "blur",
"4": "spectral overflow",
"5": "distortion",
"6": "stitching traces",
"7": "stitching errors",
"8": "stripe noise"
The model outputs a 9-dimensional vector; each dimension gives the probability that the corresponding quality problem is present in the image, and a probability greater than 50% is treated as the problem being present.
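A minimal inference sketch (this assumes the checkpoint loads with the standard transformers image-classification classes and that the nine outputs are independent probabilities, i.e. sigmoid rather than softmax):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "yuyijiong/swin-v2-base-remote-sensing-quality"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("tile.png").convert("RGB")  # hypothetical remote-sensing tile
with torch.no_grad():
    logits = model(**processor(images=image, return_tensors="pt")).logits
probs = torch.sigmoid(logits)[0]  # one probability per quality problem
flagged = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(flagged)
```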
|
KimByeongSu/gpt-neo-2.7B-cs-finetuning-7-60000 | KimByeongSu | 2024-05-24T03:38:56Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-2.7B",
"base_model:finetune:EleutherAI/gpt-neo-2.7B",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T06:53:37Z | ---
license: mit
tags:
- generated_from_trainer
base_model: EleutherAI/gpt-neo-2.7B
model-index:
- name: gpt-neo-2.7B-cs-finetuning-7-60000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-2.7B-cs-finetuning-7-60000
This model is a fine-tuned version of [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5787 | 1.0 | 765 | 2.4157 |
| 1.8828 | 2.0 | 1530 | 2.3721 |
| 1.4804 | 3.0 | 2295 | 2.4477 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1
- Datasets 2.12.0
- Tokenizers 0.15.1
|
hgnoi/tIpj9Kl1aoz9GKoT | hgnoi | 2024-05-24T03:28:06Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T03:26:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
grimjim/Mistral-7B-Instruct-demi-merge-v0.3-7B | grimjim | 2024-05-24T03:27:11Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:merge:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T20:01:56Z | ---
base_model:
- mistralai/Mistral-7B-v0.3
- mistralai/Mistral-7B-Instruct-v0.3
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Mistral-7B-Instruct-demi-merge-v0.3-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This is a blend of base and instruct models, intended to enable fine-tuning and/or merging (by anyone).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3)
* [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.3
layer_range: [0,32]
- model: mistralai/Mistral-7B-v0.3
layer_range: [0,32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.3
parameters:
t:
- value: 0.5
dtype: bfloat16
```
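For reference, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line; with the single value t = 0.5 used here, both endpoints contribute equally:
```latex
% Spherical linear interpolation between flattened weight tensors p and q,
% where \theta is the angle between them; t = 0.5 weights both equally.
\mathrm{slerp}(p, q; t) = \frac{\sin\bigl((1-t)\theta\bigr)}{\sin\theta}\, p
                        + \frac{\sin(t\theta)}{\sin\theta}\, q
```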
|
GeneStory/Classify-model-v1.2 | GeneStory | 2024-05-24T03:25:11Z | 138 | 0 | transformers | [
"transformers",
"safetensors",
"new",
"text-classification",
"generated_from_trainer",
"custom_code",
"base_model:Alibaba-NLP/gte-base-en-v1.5",
"base_model:finetune:Alibaba-NLP/gte-base-en-v1.5",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-classification | 2024-05-14T08:51:31Z | ---
license: apache-2.0
base_model: Alibaba-NLP/gte-base-en-v1.5
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Classify-model-v1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Classify-model-v1.2
This model is a fine-tuned version of [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2901
- Accuracy: 0.9464
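A minimal inference sketch (the label set is not documented in this card, and `trust_remote_code=True` is assumed to be needed because the GTE base model ships custom modeling code):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="GeneStory/Classify-model-v1.2",
    trust_remote_code=True,
)
print(clf("Example input sentence."))
```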
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0269 | 1.0 | 11 | 0.3949 | 0.8482 |
| 0.2124 | 2.0 | 22 | 0.2804 | 0.8929 |
| 0.0651 | 3.0 | 33 | 0.2142 | 0.9107 |
| 0.0224 | 4.0 | 44 | 0.2613 | 0.9196 |
| 0.0111 | 5.0 | 55 | 0.3168 | 0.9375 |
| 0.0082 | 6.0 | 66 | 0.2658 | 0.9464 |
| 0.0053 | 7.0 | 77 | 0.2768 | 0.9464 |
| 0.0049 | 8.0 | 88 | 0.2901 | 0.9464 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
Sorour/cls_fomc_llama3_v1 | Sorour | 2024-05-24T03:16:49Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-18T18:50:53Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: cls_fomc_llama3_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cls_fomc_llama3_v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6559
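This repo contains a LoRA adapter rather than full model weights; a minimal loading sketch with PEFT (the base model id is taken from the metadata above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the fine-tuned adapter on top of the base model.
model = PeftModel.from_pretrained(base, "Sorour/cls_fomc_llama3_v1")
```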
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6624 | 0.6723 | 20 | 0.6671 |
| 0.5462 | 1.3445 | 40 | 0.6559 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
BotCuddles/llama3_try1_adapter | BotCuddles | 2024-05-24T03:14:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T02:28:53Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
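The card leaves this section as a placeholder. The repo name suggests a LoRA adapter trained with Unsloth; if that is correct (the card does not confirm it), PEFT can load the adapter together with the base model recorded in its config:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model recorded in the adapter config, then applies the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    "BotCuddles/llama3_try1_adapter", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("BotCuddles/llama3_try1_adapter")
```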
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adjohn1313/gptj-allenai_toxicity-explainable-no-avg | adjohn1313 | 2024-05-24T03:04:41Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gptj",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T03:00:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
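The card provides no code. Given the `gptj` / `text-generation` tags, a minimal, assumed generation sketch follows (GPT-J is roughly 6B parameters, so adequate GPU memory or half precision is advisable; the prompt is purely illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="adjohn1313/gptj-allenai_toxicity-explainable-no-avg",
    device_map="auto",  # requires `accelerate`; GPT-J is ~6B parameters
)
out = generator("Explain whether the following sentence is toxic:", max_new_tokens=64)
print(out[0]["generated_text"])
```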
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hgnoi/vJmQHNJ6dQ8ZflYc | hgnoi | 2024-05-24T03:04:19Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T03:02:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
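No code is given. Assuming this is a standard StableLM causal-LM checkpoint, as the tags indicate, a hedged loading-and-generation sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "hgnoi/vJmQHNJ6dQ8ZflYc"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"  # dtype is an assumption
)
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```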
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GENIAC-Team-Ozaki/lora-dpo-finetuned-stage4-full-sft-v2-0.5_1e-6_ep-1 | GENIAC-Team-Ozaki | 2024-05-24T02:59:53Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T02:49:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
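The card leaves this blank. As a rough sketch, assuming the checkpoint loads as a plain Llama causal LM (which the tags suggest):

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="GENIAC-Team-Ozaki/lora-dpo-finetuned-stage4-full-sft-v2-0.5_1e-6_ep-1",
    device_map="auto",
)
out = pipe("Question: What is machine learning?\nAnswer:", max_new_tokens=64)
print(out[0]["generated_text"])
```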
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
saransh03sharma/mintrec2-mistral-2-7b-200-5 | saransh03sharma | 2024-05-24T02:47:16Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T02:41:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
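No usage snippet is provided. A minimal, assumed example for a Mistral-based causal LM like this one (the intent-style prompt is a guess from the repo name, not from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "saransh03sharma/mintrec2-mistral-2-7b-200-5"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)
prompt = "Utterance: Could you help me book a table?\nIntent:"  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```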
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tsavage68/MedQA_L3_350steps_1e7rate_03beta_CSFTDPO | tsavage68 | 2024-05-24T02:42:23Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T02:38:07Z | ---
license: llama3
base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_350steps_1e7rate_03beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_350steps_1e7rate_03beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6516
- Rewards/chosen: 0.2738
- Rewards/rejected: 0.1790
- Rewards/accuracies: 0.7099
- Rewards/margins: 0.0948
- Logps/rejected: -33.2582
- Logps/chosen: -30.4158
- Logits/rejected: -0.7313
- Logits/chosen: -0.7305
## Model description
More information needed
## Intended uses & limitations
More information needed
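The card gives no usage code. Since the model descends from a Llama-3 Instruct SFT and carries the `conversational` tag, a chat-style sketch is plausible; whether the tokenizer actually ships a chat template is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tsavage68/MedQA_L3_350steps_1e7rate_03beta_CSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "List common causes of acute chest pain."}]
# Assumes a chat template is bundled with the tokenizer (typical for Llama-3 derivatives).
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```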
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 350
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6925 | 0.0489 | 50 | 0.6930 | -0.0016 | -0.0023 | 0.5011 | 0.0007 | -33.8624 | -31.3338 | -0.7320 | -0.7314 |
| 0.6841 | 0.0977 | 100 | 0.6807 | 0.2459 | 0.2195 | 0.6549 | 0.0264 | -33.1233 | -30.5088 | -0.7330 | -0.7323 |
| 0.6524 | 0.1466 | 150 | 0.6658 | 0.3522 | 0.2898 | 0.6703 | 0.0624 | -32.8887 | -30.1544 | -0.7315 | -0.7308 |
| 0.631 | 0.1954 | 200 | 0.6545 | 0.1829 | 0.0948 | 0.6923 | 0.0881 | -33.5389 | -30.7188 | -0.7310 | -0.7303 |
| 0.6675 | 0.2443 | 250 | 0.6520 | 0.2481 | 0.1544 | 0.7121 | 0.0938 | -33.3403 | -30.5014 | -0.7309 | -0.7301 |
| 0.6479 | 0.2931 | 300 | 0.6509 | 0.2738 | 0.1773 | 0.7099 | 0.0966 | -33.2640 | -30.4157 | -0.7310 | -0.7303 |
| 0.6583 | 0.3420 | 350 | 0.6516 | 0.2738 | 0.1790 | 0.7099 | 0.0948 | -33.2582 | -30.4158 | -0.7313 | -0.7305 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
princeton-nlp/Llama-3-Base-8B-SFT-SimPO | princeton-nlp | 2024-05-24T02:40:41Z | 6,067 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T02:37:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
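The section is empty; below is a brief, assumed generation example for this SimPO-trained Llama-3 base model (sampling settings are arbitrary):

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="princeton-nlp/Llama-3-Base-8B-SFT-SimPO",
    device_map="auto",
)
out = pipe(
    "Write one sentence about preference optimization.",
    max_new_tokens=48, do_sample=True, temperature=0.7,
)
print(out[0]["generated_text"])
```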
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
songhyundong/distilbert-base-uncased-finetuned-squad | songhyundong | 2024-05-24T02:39:23Z | 127 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-24T00:20:59Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1512
## Model description
More information needed
## Intended uses & limitations
More information needed
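The card does not show how to query the model. As this is a SQuAD-style extractive QA fine-tune, a minimal sketch (question and context are stand-ins):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="songhyundong/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What task was the model fine-tuned for?",
    context="This DistilBERT checkpoint was fine-tuned on SQuAD-style data "
            "to extract answer spans from a given context.",
)
print(result["answer"], result["score"])
```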
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3025 | 1.0 | 2767 | 1.2238 |
| 1.029 | 2.0 | 5534 | 1.1357 |
| 0.8727 | 3.0 | 8301 | 1.1512 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.0.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
dtorber/BioNLP-tech-intro-disc-PLOS | dtorber | 2024-05-24T02:39:12Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"led",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2024-05-12T21:49:18Z | ---
tags:
- summarization
- generated_from_trainer
model-index:
- name: BioNLP-tech-intro-disc-PLOS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioNLP-tech-intro-disc-PLOS
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
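No usage example is included. A hedged sketch for this LED summarization checkpoint; LED handles long inputs, which suits full-article lay summarization, and the input text here is a stand-in:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="dtorber/BioNLP-tech-intro-disc-PLOS")
article = "..."  # stand-in: paste the full text of a PLOS article here
print(summarizer(article, max_length=256, min_length=64)[0]["summary_text"])
```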
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.2
|
mradermacher/MuMath-Code-CL-7B-GGUF | mradermacher | 2024-05-24T02:36:23Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:weihao1/MuMath-Code-CL-7B",
"base_model:quantized:weihao1/MuMath-Code-CL-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T00:46:34Z | ---
base_model: weihao1/MuMath-Code-CL-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/weihao1/MuMath-Code-CL-7B
<!-- provided-files -->
Weighted/imatrix quants are not available from me at this time. If they do not appear within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
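For a Python route (an illustration, not part of the original card), the `llama-cpp-python` bindings can load one of the files listed below after downloading it:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes the Q4_K_M file from the table below has been downloaded locally.
llm = Llama(model_path="MuMath-Code-CL-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Solve step by step: 12 * 17 = ?", max_tokens=128)
print(out["choices"][0]["text"])
```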
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-CL-7B-GGUF/resolve/main/MuMath-Code-CL-7B.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AnhLedger/Finetune_llama3 | AnhLedger | 2024-05-24T02:33:25Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-bnb-4bit",
"region:us"
] | null | 2024-05-24T02:31:03Z | ---
library_name: peft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
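The card leaves this blank. This repo is a PEFT adapter for `unsloth/llama-3-8b-bnb-4bit`, so a hedged loading sketch is (the 4-bit base requires `bitsandbytes`):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Resolves the 4-bit base model from the adapter config; needs `bitsandbytes`.
model = AutoPeftModelForCausalLM.from_pretrained(
    "AnhLedger/Finetune_llama3", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
```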
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
ahmedgongi/Llama_dev3tokenizer_finale7 | ahmedgongi | 2024-05-24T02:32:51Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-23T17:41:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
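No code is provided. The repo name suggests it hosts a tokenizer; if so, a minimal sketch:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("ahmedgongi/Llama_dev3tokenizer_finale7")
print(tok.tokenize("Hello, world!"))
```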
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MuMath-Code-L-7B-GGUF | mradermacher | 2024-05-24T02:30:04Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:weihao1/MuMath-Code-L-7B",
"base_model:quantized:weihao1/MuMath-Code-L-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T00:48:26Z | ---
base_model: weihao1/MuMath-Code-L-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/weihao1/MuMath-Code-L-7B
<!-- provided-files -->
Weighted/imatrix quants are not available from me at this time. If they do not appear within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
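As an aside not found in the original card, a quant can also be fetched and loaded directly from Python via `huggingface_hub` plus `llama-cpp-python`:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one file from the table below, then load it locally.
path = hf_hub_download(
    repo_id="mradermacher/MuMath-Code-L-7B-GGUF",
    filename="MuMath-Code-L-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Q: 15 * 13 = ?\nA:", max_tokens=32)["choices"][0]["text"])
```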
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MuMath-Code-L-7B-GGUF/resolve/main/MuMath-Code-L-7B.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ruidanwang/distilbert-base-uncased-finetuned-imdb | ruidanwang | 2024-05-24T02:28:54Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-24T02:25:28Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4894
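Given the model name, this checkpoint was presumably adapted to movie-review text, so a quick way to try it is the `fill-mask` pipeline; this sketch is not part of the auto-generated card, and the example sentence is an illustrative assumption:
```python
from transformers import pipeline

# Masked-language-model inference with the fine-tuned checkpoint.
fill = pipeline(
    "fill-mask",
    model="ruidanwang/distilbert-base-uncased-finetuned-imdb",
)

# DistilBERT uses the [MASK] token.
for pred in fill("This movie was absolutely [MASK]."):
    print(f'{pred["token_str"]}: {pred["score"]:.3f}')
```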
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6819 | 1.0 | 157 | 2.4978 |
| 2.5872 | 2.0 | 314 | 2.4488 |
| 2.527 | 3.0 | 471 | 2.4823 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
youjunhyeok/Llama-3-8B-slerp-262k-sft-lora-ko | youjunhyeok | 2024-05-24T02:16:01Z | 2,245 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T05:09:27Z | ---
license: apache-2.0
library_name: transformers
---
## Model
- base model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- parent model: [DavidAhn/Llama-3-8B-slerp-262k](https://huggingface.co/DavidAhn/Llama-3-8B-slerp-262k)
## Dataset
- [youjunhyeok/llama3_train](https://huggingface.co/datasets/youjunhyeok/llama3_train)
## BenchMark (KOR)
```
# alias
A = youjunhyeok/Llama-3-8B-slerp-262k-sft-lora-ko
B = DavidAhn/Llama-3-8B-slerp-262k
C = meta-llama/Meta-Llama-3-8B
D = chihoonlee10/T3Q-ko-solar-dpo-v7.0 (#1 on the Korean leaderboard as of 24.05.24)
```
| Benchmark (macro_f1) | A | B | C | D |
|---------------------------|:----:|:----:|:----:|:----:|
| kobest_boolq (0-shot) | 57.6 | 33.5 | 38.2 | 34.1 |
| kobest_boolq (5-shot) | 77.9 | 68.8 | 83.8 | 93.1 |
| kobest_copa (0-shot) | 59.9 | 58.5 | 63.1 | 81.0 |
| kobest_copa (5-shot) | 61.4 | 61.7 | 69.1 | 91.0 |
| kobest_hellaswag (0-shot) | 40.6 | 43.2 | 42.1 | 55.1 |
| kobest_hellaswag (5-shot) | 41.5 | 45.3 | 44.2 | 55.2 |
| kobest_sentineg (0-shot) | 61.1 | 34.8 | 51.5 | 82.7 |
| kobest_sentineg (5-shot) | 92.4 | 85.8 | 94.7 | 91.4 |
## BenchMark (ENG)
| Model | openbookqa | hellaswag | boolq | arc_easy | arc_challenge |
|:----------------------------------------------|---------:|---------:|---------:|---------:|---------:|
| youjunhyeok/Llama-3-8B-slerp-262k-sft-lora-ko | 0.334 | 0.575 | 0.778 | 0.763 | 0.471 |
| DavidAhn/Llama-3-8B-slerp-262k | 0.312 | 0.587 | 0.832 | 0.808 | 0.518 |
| meta-llama/Meta-Llama-3-8B-Instruct | 0.338 | 0.576 | 0.831 | 0.815 | 0.529 | |
Ksgk-fy/Meta-Llama-3-8B-4bit-GPTQ | Ksgk-fy | 2024-05-24T02:08:42Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-24T01:14:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ksgk-fy/Meta-Llama-3-8B-4bit-GPTQ-Phil | Ksgk-fy | 2024-05-24T02:08:28Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-24T01:40:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ltg/mt0-definition-en-xl | ltg | 2024-05-24T01:57:33Z | 171 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"definition-modeling",
"en",
"dataset:marksverdhei/wordnet-definitions-en-2021",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-18T22:27:03Z | ---
tags:
- text2text-generation
- definition-modeling
metrics:
- rouge
model-index:
- name: mt0-definition-en-xl
results: []
language:
- en
widget:
- text: "He ate a sweet apple. What is the definition of apple?"
example_title: "Definition generation"
- text: "The paper contains a number of original ideas about color perception. What is the definition of original?"
example_title: "Definition generation"
license: cc-by-sa-4.0
datasets:
- marksverdhei/wordnet-definitions-en-2021
---
# mT0-Definition-En XL
This model is a version of [mT0 XL](https://huggingface.co/bigscience/mt0-xl) finetuned on a dataset of English definitions and usage examples.
It generates definitions of English words in context.
Its input is the usage example followed by the instruction question "What is the definition of TARGET_WORD?"
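For illustration, here is a minimal sketch (not part of the original card) of querying the model with that prompt format through `transformers`; the generation settings are illustrative assumptions:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ltg/mt0-definition-en-xl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# A usage example followed by the instruction question described above.
prompt = "He ate a sweet apple. What is the definition of apple?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```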
## Models for other languages:
- English: [mT0-Definition-En XL](https://huggingface.co/ltg/mt0-definition-en-xl)
- Norwegian: [mT0-Definition-No XL](https://huggingface.co/ltg/mt0-definition-no-xl)
- Russian: [mT0-Definition-Ru XL](https://huggingface.co/ltg/mt0-definition-ru-xl)
## Model description
See details in the paper [Enriching Word Usage Graphs with Cluster Definitions](https://aclanthology.org/2024.lrec-main.546/) (LREC-COLING'2024) by
Mariia Fedorova, Andrey Kutuzov, Nikolay Arefyev and Dominik Schlechtweg.
## Intended uses & limitations
The model is intended for research purposes, as a source of contextualized dictionary-like lexical definitions.
Generated definitions can contain all sorts of biases and stereotypes, stemming from the underlying language model.
## Training and evaluation data
Three datasets were used to fine-tune the model:
- *WordNet* ([Ishiwatari et al., NAACL 2019](https://aclanthology.org/N19-1350/)), also [available on HF](https://huggingface.co/datasets/marksverdhei/wordnet-definitions-en-2021)
- *Oxford dictionary or CHA* ([Gadetsky et al., ACL 2018](https://aclanthology.org/P18-2043/))
- English subset of *CodWoE* ([Mickus et al., SemEval 2022](https://aclanthology.org/2022.semeval-1.1/))
## Training results
mT0-Definition-En XL achieves the following results on the concatenated validation sets from WordNet and the Oxford dictionary:
- Loss: 1.7210
- Rouge1: 41.5067
- Rouge2: 23.7149
- Rougel: 39.138
- Rougelsum: 39.1647
- Gen Len: 15.1578
## Training procedure
mT0-Definition-En XL was fine-tuned in a sequence-to-sequence mode on examples of contextualized dictionary definitions.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+rocm5.2
- Datasets 2.12.0
- Tokenizers 0.12.1
## Citation
```
@inproceedings{kutuzov-etal-2024-enriching-word,
title = "Enriching Word Usage Graphs with Cluster Definitions",
author = "Kutuzov, Andrey and
Fedorova, Mariia and
Schlechtweg, Dominik and
Arefyev, Nikolay",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.546",
pages = "6189--6198",
abstract = "We present a dataset of word usage graphs (WUGs), where the existing WUGs for multiple languages are enriched with cluster labels functioning as sense definitions. They are generated from scratch by fine-tuned encoder-decoder language models. The conducted human evaluation has shown that these definitions match the existing clusters in WUGs better than the definitions chosen from WordNet by two baseline systems. At the same time, the method is straightforward to use and easy to extend to new languages. The resulting enriched datasets can be extremely helpful for moving on to explainable semantic change modeling.",
}
```
|
ltg/mt0-definition-no-xl | ltg | 2024-05-24T01:56:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"definition-modeling",
"no",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-18T23:45:47Z | ---
tags:
- text2text-generation
- definition-modeling
metrics:
- rouge
model-index:
- name: mt0-definition-no-xl
results: []
language:
- no
widget:
- text: "Ha egen brygge og båthus. Hva betyr båthus?"
example_title: "Definition generation"
license: cc-by-sa-4.0
---
# mT0-Definition-No XL
This model is a version of [mT0 XL](https://huggingface.co/bigscience/mt0-xl) finetuned on [Bokmålsordboka](https://ordbokene.no/),
a dataset of Norwegian definitions and usage examples.
It generates definitions of Norwegian words in context.
Its input is the usage example followed by the instruction question "Hva betyr TARGET_WORD?" ("What does TARGET_WORD mean?").
## Models for other languages:
- English: [mT0-Definition-En XL](https://huggingface.co/ltg/mt0-definition-en-xl)
- Norwegian: [mT0-Definition-No XL](https://huggingface.co/ltg/mt0-definition-no-xl)
- Russian: [mT0-Definition-Ru XL](https://huggingface.co/ltg/mt0-definition-ru-xl)
## Model description
See details in the paper [Enriching Word Usage Graphs with Cluster Definitions](https://aclanthology.org/2024.lrec-main.546/) (LREC-COLING'2024) by
Mariia Fedorova, Andrey Kutuzov, Nikolay Arefyev and Dominik Schlechtweg.
## Intended uses & limitations
The model is intended for research purposes, as a source of contextualized dictionary-like lexical definitions.
Generated definitions can contain all sorts of biases and stereotypes, stemming from the underlying language model.
## Training and evaluation data
[Bokmålsordboka](https://ordbokene.no/) by The Norwegian Language Council and the University of Bergen.
## Training results
mT0-Definition-No XL achieves the following results on the evaluation set:
- Loss: 2.0358
- Rouge1: 28.3491
- Rouge2: 14.2699
- Rougel: 27.7602
- Rougelsum: 27.752
- Gen Len: 10.0765
## Training procedure
mT0-Definition-No XL was fine-tuned in a sequence-to-sequence mode on examples of contextualized dictionary definitions.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Framework versions
- Transformers 4.37.1
- Pytorch 1.13.1+rocm5.2
- Datasets 2.16.1
- Tokenizers 0.15.1
## Citation
```
@inproceedings{kutuzov-etal-2024-enriching-word,
title = "Enriching Word Usage Graphs with Cluster Definitions",
author = "Kutuzov, Andrey and
Fedorova, Mariia and
Schlechtweg, Dominik and
Arefyev, Nikolay",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.546",
pages = "6189--6198",
abstract = "We present a dataset of word usage graphs (WUGs), where the existing WUGs for multiple languages are enriched with cluster labels functioning as sense definitions. They are generated from scratch by fine-tuned encoder-decoder language models. The conducted human evaluation has shown that these definitions match the existing clusters in WUGs better than the definitions chosen from WordNet by two baseline systems. At the same time, the method is straightforward to use and easy to extend to new languages. The resulting enriched datasets can be extremely helpful for moving on to explainable semantic change modeling.",
}
```
|
hgnoi/XeBWkEf2mW77tWaY | hgnoi | 2024-05-24T01:54:29Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T01:52:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MoMonir/Phi-3-medium-128k-instruct-GGUF | MoMonir | 2024-05-24T01:50:52Z | 23 | 1 | null | [
"gguf",
"nlp",
"code",
"text-generation",
"multilingual",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-24T01:35:46Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# MoMonir/Phi-3-medium-128k-instruct-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using llama.cpp.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) for more details on the model.
<!-- README_GGUF.md-about-gguf start -->
### About GGUF ([TheBloke](https://huggingface.co/TheBloke) Description)
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo MoMonir/Phi-3-medium-128k-instruct-GGUF --model Phi-3-medium-128k-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo MoMonir/Phi-3-medium-128k-instruct-GGUF --model Phi-3-medium-128k-instruct.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m Phi-3-medium-128k-instruct.Q4_K_M.gguf -n 128
```
|
nguyenhacker/vit-base-patch16-224-in21k-finetuned-lora-food101 | nguyenhacker | 2024-05-24T01:50:37Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T01:33:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dmitrii-a-lex/llama3-8b-csn-280step-lora | dmitrii-a-lex | 2024-05-24T01:47:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T01:47:21Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** dmitrii-a-lex
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ziqin/ziqin | ziqin | 2024-05-24T01:46:38Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-24T01:46:37Z | ---
license: apache-2.0
---
|
T3Q-LLM/T3Q-LLM2-sft1.2 | T3Q-LLM | 2024-05-24T01:45:51Z | 41 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T00:16:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Evaluation
hf-causal-experimental (pretrained=T3Q-LLM/T3Q-LLM2-sft1.2,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.9416|± |0.0063|
| | |macro_f1|0.9415|± |0.0063|
|kobest_copa | 0|acc |0.7730|± |0.0133|
| | |macro_f1|0.7725|± |0.0133|
|kobest_hellaswag| 0|acc |0.5100|± |0.0224|
| | |acc_norm|0.5740|± |0.0221|
| | |macro_f1|0.5074|± |0.0223|
|kobest_sentineg | 0|acc |0.7632|± |0.0214|
| | |macro_f1|0.7545|± |0.0220| |
Liuza1/test1111 | Liuza1 | 2024-05-24T01:45:42Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-24T01:45:42Z | ---
license: apache-2.0
---
|
Timeshift/distilbert-base-uncased-finetuned-emotion | Timeshift | 2024-05-24T01:45:25Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-24T01:40:44Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9268799638507115
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- Accuracy: 0.927
- F1: 0.9269
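A minimal inference sketch (not part of the auto-generated card); the example sentence is an illustrative assumption:
```python
from transformers import pipeline

# Text classification with the fine-tuned emotion checkpoint.
clf = pipeline(
    "text-classification",
    model="Timeshift/distilbert-base-uncased-finetuned-emotion",
)
print(clf("I can't wait to see you this weekend!"))
# Expected shape of the output: [{'label': ..., 'score': ...}]
```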
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8393 | 1.0 | 250 | 0.3184 | 0.904 | 0.9027 |
| 0.2532 | 2.0 | 500 | 0.2204 | 0.927 | 0.9269 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jspr/talosian_v3_merged | jspr | 2024-05-24T01:43:06Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T01:33:50Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: mistralai/Mistral-7B-v0.3
---
# Uploaded model
- **Developed by:** jspr
- **License:** apache-2.0
- **Finetuned from model :** mistralai/Mistral-7B-v0.3
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
loyahdev/loyahgpt | loyahdev | 2024-05-24T01:36:59Z | 0 | 0 | flair | [
"flair",
"en",
"dataset:HuggingFaceFW/fineweb",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2024-05-24T01:36:06Z | ---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb
language:
- en
metrics:
- accuracy
library_name: flair
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jspr/talosian_v3_peft | jspr | 2024-05-24T01:31:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T01:31:11Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: mistralai/Mistral-7B-v0.3
---
# Uploaded model
- **Developed by:** jspr
- **License:** apache-2.0
- **Finetuned from model :** mistralai/Mistral-7B-v0.3
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hgnoi/F6jP9m5a0fiKKmSu | hgnoi | 2024-05-24T01:31:15Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T01:27:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
junaidiqbalsyed/juanid-phi3-instruct-finetuned-GGUF | junaidiqbalsyed | 2024-05-24T01:27:06Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:quantized:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-24T01:20:11Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** junaidiqbalsyed
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AlignmentResearch/robust_llm_pythia-14m-imdb-gen-ian-nd | AlignmentResearch | 2024-05-24T01:26:36Z | 148 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T01:26:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
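No starter code is filled in yet; a minimal sketch, assuming standard `transformers` causal-LM usage for this GPT-NeoX-based checkpoint (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlignmentResearch/robust_llm_pythia-14m-imdb-gen-ian-nd"

# Load the tokenizer and the causal language model from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation for an illustrative movie-review prompt.
inputs = tokenizer("This movie was", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```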
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Arbi-Houssem/mms_tts_tun_Lang1.1 | Arbi-Houssem | 2024-05-24T01:24:40Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-05-24T00:15:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
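No starter code is filled in yet; a minimal sketch, assuming this checkpoint follows the standard MMS-TTS / VITS usage in `transformers` (the input sentence and output path are illustrative):

```python
import torch
import scipy.io.wavfile
from transformers import AutoTokenizer, VitsModel

model_id = "Arbi-Houssem/mms_tts_tun_Lang1.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = VitsModel.from_pretrained(model_id)

# Tokenize the input text and synthesize a waveform.
inputs = tokenizer("مرحبا بيك", return_tensors="pt")  # illustrative input text
with torch.no_grad():
    waveform = model(**inputs).waveform

# Write the mono waveform at the model's native sampling rate.
scipy.io.wavfile.write("out.wav", rate=model.config.sampling_rate, data=waveform.squeeze().numpy())
```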
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Mistral-C64Wizard-instruct-GGUF | mradermacher | 2024-05-24T01:14:39Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:pechaut/Mistral-C64Wizard-instruct",
"base_model:quantized:pechaut/Mistral-C64Wizard-instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-23T23:51:33Z | ---
base_model: pechaut/Mistral-C64Wizard-instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
Static quants of https://huggingface.co/pechaut/Mistral-C64Wizard-instruct
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
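For a quick start, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (the quant choice and prompt are illustrative; any file from the table below works the same way):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from this repo (Q4_K_M is the "fast, recommended" middle ground).
path = hf_hub_download(
    repo_id="mradermacher/Mistral-C64Wizard-instruct-GGUF",
    filename="Mistral-C64Wizard-instruct.Q4_K_M.gguf",
)

# Load the GGUF with llama.cpp and run a short completion.
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Write a one-line BASIC program for the C64:", max_tokens=64)
print(out["choices"][0]["text"])
```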
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
junaidiqbalsyed/juanid-phi3-instruct-finetuned | junaidiqbalsyed | 2024-05-24T01:10:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T01:10:08Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** junaidiqbalsyed
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This Phi-3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
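A minimal loading sketch, assuming the repo contains Unsloth-compatible weights (the sequence length and 4-bit loading mirror the Phi-3-mini-4k base; not author-provided):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="junaidiqbalsyed/juanid-phi3-instruct-finetuned",
    max_seq_length=4096,  # Phi-3-mini-4k context window
    load_in_4bit=True,
)

# Enable Unsloth's fast inference path before generating.
FastLanguageModel.for_inference(model)
```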
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
T3Q-LLM/T3Q-LLM2-FP-v2.0 | T3Q-LLM | 2024-05-24T01:08:46Z | 45 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T04:54:06Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Evaluation
Evaluated with the lm-evaluation-harness model type `hf-causal-experimental` (pretrained=T3Q-LLM/T3Q-LLM2-FP-v2.0, use_accelerate=true, trust_remote_code=true); limit: None, provide_description: False, num_fewshot: 0, batch_size: 8.
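A sketch of reproducing this run through the harness's Python API, assuming a build of lm-evaluation-harness (e.g. the polyglot fork) that ships the `hf-causal-experimental` model type and the `kobest_*` tasks:

```python
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=T3Q-LLM/T3Q-LLM2-FP-v2.0,use_accelerate=true,trust_remote_code=true",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```

The measured scores: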
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5085|± |0.0133|
| | |macro_f1|0.3496|± |0.0076|
|kobest_copa | 0|acc |0.7680|± |0.0134|
| | |macro_f1|0.7677|± |0.0134|
|kobest_hellaswag| 0|acc |0.4920|± |0.0224|
| | |acc_norm|0.5740|± |0.0221|
| | |macro_f1|0.4889|± |0.0223|
|kobest_sentineg | 0|acc |0.6826|± |0.0234|
| | |macro_f1|0.6616|± |0.0244| |
mradermacher/Athena-Mistral-7b-v0.2-GGUF | mradermacher | 2024-05-24T01:07:25Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"dataset:NotAiLOL/Athena-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-13T21:52:09Z | ---
base_model: NotAiLOL/Athena-Mistral-7b-v0.2
datasets:
- NotAiLOL/Athena-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
Static quants of https://huggingface.co/NotAiLOL/Athena-Mistral-7b-v0.2
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hgnoi/4zorKjh2MTNaftD3 | hgnoi | 2024-05-24T01:05:26Z | 139 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T01:03:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
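No starter code is filled in yet; a minimal sketch using the high-level `pipeline` API (assuming standard text-generation usage; some StableLM variants may additionally need `trust_remote_code=True`):

```python
from transformers import pipeline

# Build a text-generation pipeline around this checkpoint.
generator = pipeline("text-generation", model="hgnoi/4zorKjh2MTNaftD3")
print(generator("Hello, world.", max_new_tokens=40)[0]["generated_text"])
```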
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Augusto777/vit-base-patch16-224-RU3-40 | Augusto777 | 2024-05-24T01:04:43Z | 218 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-24T00:36:28Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-RU3-40
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8333333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-RU3-40
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5667
- Accuracy: 0.8333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
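These settings map onto `transformers.TrainingArguments` roughly as follows (a reconstruction sketch, not the author's exact training script):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit-base-patch16-224-RU3-40",
    learning_rate=5.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 effective train batch
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=40,
)
```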
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3821 | 0.99 | 19 | 1.3119 | 0.4833 |
| 1.2698 | 1.97 | 38 | 1.0852 | 0.6167 |
| 0.9819 | 2.96 | 57 | 0.8757 | 0.7 |
| 0.6671 | 4.0 | 77 | 0.7689 | 0.7333 |
| 0.4248 | 4.99 | 96 | 0.7294 | 0.7167 |
| 0.3005 | 5.97 | 115 | 0.6518 | 0.7833 |
| 0.2035 | 6.96 | 134 | 0.5667 | 0.8333 |
| 0.2195 | 8.0 | 154 | 0.6646 | 0.8333 |
| 0.1654 | 8.99 | 173 | 0.6294 | 0.8167 |
| 0.1581 | 9.97 | 192 | 0.7211 | 0.7833 |
| 0.1338 | 10.96 | 211 | 0.8129 | 0.7833 |
| 0.1188 | 12.0 | 231 | 0.7925 | 0.8167 |
| 0.1179 | 12.99 | 250 | 0.9588 | 0.7667 |
| 0.1017 | 13.97 | 269 | 1.0875 | 0.7167 |
| 0.0845 | 14.96 | 288 | 0.9355 | 0.7 |
| 0.1109 | 16.0 | 308 | 0.9387 | 0.8167 |
| 0.0711 | 16.99 | 327 | 1.1214 | 0.7333 |
| 0.0884 | 17.97 | 346 | 0.9688 | 0.7667 |
| 0.0668 | 18.96 | 365 | 1.0306 | 0.8 |
| 0.0716 | 20.0 | 385 | 1.2653 | 0.7167 |
| 0.0643 | 20.99 | 404 | 0.9894 | 0.7833 |
| 0.0517 | 21.97 | 423 | 1.0439 | 0.7667 |
| 0.0597 | 22.96 | 442 | 1.1470 | 0.7667 |
| 0.0533 | 24.0 | 462 | 1.0848 | 0.7833 |
| 0.0529 | 24.99 | 481 | 1.1481 | 0.75 |
| 0.0524 | 25.97 | 500 | 1.1322 | 0.7333 |
| 0.0525 | 26.96 | 519 | 1.1868 | 0.7333 |
| 0.0517 | 28.0 | 539 | 1.1561 | 0.7167 |
| 0.0309 | 28.99 | 558 | 1.0562 | 0.7833 |
| 0.0403 | 29.97 | 577 | 1.2901 | 0.7333 |
| 0.0392 | 30.96 | 596 | 1.1295 | 0.7667 |
| 0.0404 | 32.0 | 616 | 1.1198 | 0.7667 |
| 0.0381 | 32.99 | 635 | 1.2986 | 0.7167 |
| 0.0262 | 33.97 | 654 | 1.1655 | 0.75 |
| 0.0354 | 34.96 | 673 | 1.1223 | 0.7833 |
| 0.0224 | 36.0 | 693 | 1.1679 | 0.7833 |
| 0.0244 | 36.99 | 712 | 1.0999 | 0.8167 |
| 0.0368 | 37.97 | 731 | 1.1213 | 0.7833 |
| 0.0199 | 38.96 | 750 | 1.1003 | 0.8 |
| 0.028 | 39.48 | 760 | 1.0989 | 0.8 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|