modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
anchovy/maple728-time_moe_200M | anchovy | 2024-11-06T19:33:39Z | 9 | 0 | null | [
"safetensors",
"time_moe",
"time-series-forecasting",
"custom_code",
"arxiv:2409.16040",
"license:apache-2.0",
"region:us"
] | time-series-forecasting | 2024-11-06T19:33:39Z | ---
license: apache-2.0
pipeline_tag: time-series-forecasting
---
# Model Card for TimeMoE
This repository contains the weights of the TimeMoE-200M model from the paper [Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts](https://huggingface.co/papers/2409.16040).
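A minimal loading sketch is below (an assumption based on the Hub's `trust_remote_code` flow and the usage shown in the upstream Time-MoE repository; the input shapes, normalization, and `generate`-style forecasting call are illustrative rather than confirmed by this card):
```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: the checkpoint ships custom modeling code, so trust_remote_code is required.
model = AutoModelForCausalLM.from_pretrained(
    "anchovy/maple728-time_moe_200M",
    device_map="cpu",
    trust_remote_code=True,
)

# Toy input: a batch of 2 univariate series with 12 historical points each.
seqs = torch.randn(2, 12)

# Normalize each series, forecast 6 future points, then invert the normalization.
mean, std = seqs.mean(dim=-1, keepdim=True), seqs.std(dim=-1, keepdim=True)
normed = (seqs - mean) / std
out = model.generate(normed, max_new_tokens=6)  # shape [batch, 12 + 6]
forecast = out[:, -6:] * std + mean             # shape [batch, 6]
print(forecast)
```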
For details on how to use this model, please visit our [GitHub page](https://github.com/time-moe/time-moe). |
kaiwenw/oct31_oasst_llama70b_jft | kaiwenw | 2024-11-06T19:30:29Z | 40 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T04:13:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
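The snippet is left as a placeholder above; since the repo tags mark this as a Llama text-generation checkpoint, here is a minimal sketch assuming the standard 🤗 Transformers pipeline API (the prompt is illustrative, and a 70B model needs multiple high-memory GPUs):
```python
from transformers import pipeline

# Assumption: the checkpoint loads like a standard Llama causal LM.
# device_map="auto" shards the 70B weights across available GPUs.
generator = pipeline(
    "text-generation",
    model="kaiwenw/oct31_oasst_llama70b_jft",
    torch_dtype="auto",
    device_map="auto",
)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```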
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vishnun0027/Llama-3.2-1B-Instruct-Indian-Law | vishnun0027 | 2024-11-06T19:27:57Z | 123 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T19:26:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
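The snippet is again a placeholder; because the repo tags mark this as a conversational Llama-3.2 fine-tune, here is a minimal chat-style sketch assuming the base model's chat template carried over (the question and the recent-Transformers chat-pipeline behavior are assumptions, not details from this card):
```python
from transformers import pipeline

# Assumption: the tokenizer keeps the Llama-3.2-Instruct chat template.
chat = pipeline(
    "text-generation",
    model="vishnun0027/Llama-3.2-1B-Instruct-Indian-Law",
    device_map="auto",
)
messages = [
    {"role": "user", "content": "What does Section 420 of the Indian Penal Code cover?"},
]
# With chat-formatted input, the pipeline returns the full message list including the reply.
reply = chat(messages, max_new_tokens=128)[0]["generated_text"][-1]["content"]
print(reply)
```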
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
emozilla/smol-15b-init | emozilla | 2024-11-06T19:11:21Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T18:42:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Xu-Ouyang/pythia-6.9b-deduped-int8-step64-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-06T19:11:03Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-06T19:09:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
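The snippet is a placeholder, but the repo tags indicate an 8-bit GPTQ quantization of a Pythia (gpt_neox) checkpoint; here is a minimal sketch assuming Transformers' GPTQ integration (which needs a CUDA GPU plus the optimum and auto-gptq packages, none of which are confirmed by this card):
```python
# pip install optimum auto-gptq   (assumption: GPTQ kernels are needed to run the 8-bit weights)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-6.9b-deduped-int8-step64-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config is read from the repo's config.json.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```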
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf | RichardErkhov | 2024-11-06T19:07:44Z | 31 | 0 | null | [
"gguf",
"arxiv:2311.03099",
"arxiv:2306.01708",
"endpoints_compatible",
"region:us"
] | null | 2024-11-05T17:39:53Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
code-llama-70b-python-instruct - GGUF
- Model creator: https://huggingface.co/NobodyExistsOnTheInternet/
- Original model: https://huggingface.co/NobodyExistsOnTheInternet/code-llama-70b-python-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [code-llama-70b-python-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.Q2_K.gguf) | Q2_K | 23.71GB |
| [code-llama-70b-python-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.IQ3_XS.gguf) | IQ3_XS | 26.37GB |
| [code-llama-70b-python-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.IQ3_S.gguf) | IQ3_S | 27.86GB |
| [code-llama-70b-python-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.Q3_K_S.gguf) | Q3_K_S | 27.86GB |
| [code-llama-70b-python-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.IQ3_M.gguf) | IQ3_M | 28.82GB |
| [code-llama-70b-python-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.Q3_K.gguf) | Q3_K | 30.99GB |
| [code-llama-70b-python-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.Q3_K_M.gguf) | Q3_K_M | 30.99GB |
| [code-llama-70b-python-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.Q3_K_L.gguf) | Q3_K_L | 33.67GB |
| [code-llama-70b-python-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.IQ4_XS.gguf) | IQ4_XS | 34.64GB |
| [code-llama-70b-python-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.Q4_0.gguf) | Q4_0 | 36.2GB |
| [code-llama-70b-python-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.IQ4_NL.gguf) | IQ4_NL | 36.55GB |
| [code-llama-70b-python-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/blob/main/code-llama-70b-python-instruct.Q4_K_S.gguf) | Q4_K_S | 36.55GB |
| [code-llama-70b-python-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/tree/main/) | Q4_K | 38.58GB |
| [code-llama-70b-python-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/tree/main/) | Q4_K_M | 38.58GB |
| [code-llama-70b-python-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/tree/main/) | Q4_1 | 40.2GB |
| [code-llama-70b-python-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/tree/main/) | Q5_0 | 44.2GB |
| [code-llama-70b-python-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/tree/main/) | Q5_K_S | 44.2GB |
| [code-llama-70b-python-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/tree/main/) | Q5_K | 45.41GB |
| [code-llama-70b-python-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/tree/main/) | Q5_K_M | 45.41GB |
| [code-llama-70b-python-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/tree/main/) | Q5_1 | 48.2GB |
| [code-llama-70b-python-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/tree/main/) | Q6_K | 52.7GB |
| [code-llama-70b-python-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf/tree/main/) | Q8_0 | 68.26GB |
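The card does not give a download command for the files above; a minimal sketch using `huggingface_hub` (an assumption, since the card itself documents no download flow; the filename is just one single-file row from the table):
```python
from huggingface_hub import hf_hub_download

# Fetch one quant from the table above; larger quants may be split across several files.
path = hf_hub_download(
    repo_id="RichardErkhov/NobodyExistsOnTheInternet_-_code-llama-70b-python-instruct-gguf",
    filename="code-llama-70b-python-instruct.Q4_K_S.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```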
Original model description:
---
base_model:
- meta-llama/Llama-2-70b-hf
- codellama/CodeLlama-70b-Python-hf
- codellama/CodeLlama-70b-Instruct-hf
tags:
- mergekit
- merge
license: mit
---
# Codellama-python-instruct
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) as a base.
### Models Merged
The following models were included in the merge:
* [codellama/CodeLlama-70b-Python-hf](https://huggingface.co/codellama/CodeLlama-70b-Python-hf)
* [codellama/CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: codellama/CodeLlama-70b-Python-hf
parameters:
density: 0.5
weight: 0.5
- model: codellama/CodeLlama-70b-Instruct-hf
parameters:
density: 0.5
weight: 1.0
merge_method: dare_ties
base_model: meta-llama/Llama-2-70b-hf
parameters:
# You can uncomment and set these parameters as needed
# normalize: false
# int8_mask: true
dtype: float16
```
|
mav23/Starcannon-Unleashed-12B-v1.0-GGUF | mav23 | 2024-11-06T19:05:19Z | 117 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:MarinaraSpaghetti/NemoMix-Unleashed-12B",
"base_model:merge:MarinaraSpaghetti/NemoMix-Unleashed-12B",
"base_model:nothingiisreal/MN-12B-Starcannon-v3",
"base_model:merge:nothingiisreal/MN-12B-Starcannon-v3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-06T17:09:46Z | ---
base_model:
- nothingiisreal/MN-12B-Starcannon-v3
- MarinaraSpaghetti/NemoMix-Unleashed-12B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---

Starcannon-Unleashed-12B-v1.0-GGUF
==================================
## Quantized
**GGUF:**
[VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF)
[mradermacher/Starcannon-Unleashed-12B-v1.0-GGUF](https://huggingface.co/mradermacher/Starcannon-Unleashed-12B-v1.0-GGUF)
[bartowski/Starcannon-Unleashed-12B-v1.0-GGUF](https://huggingface.co/bartowski/Starcannon-Unleashed-12B-v1.0-GGUF)
HUGE THANKS TO [mradermacher](https://huggingface.co/mradermacher)!! ( ´•̥̥̥o•̥̥̥`)♡(˘̩̩̩̩̩̩ ⌂ ˘̩̩̩̩̩̩) Gosh dang, the fella is fast, I was shook! XD And to the GOAT, the awesome [bartowski](https://huggingface.co/bartowski)! Thank you both for the GGUF quantizations.
**EXL2:**
[8bpw](https://huggingface.co/Statuo/Starcannon-Unleashed-12b-EXL2-8bpw)
[6bpw](https://huggingface.co/Statuo/Starcannon-Unleashed-12b-EXL2-6bpw)
[4bpw](https://huggingface.co/Statuo/Starcannon-Unleashed-12b-EXL2-4bpw)
And, thanks to [Statuo](https://huggingface.co/Statuo) for providing EXL2 quants! (✿◕ᗜ◕)♡
I was only able to test the model using Q6_K with 24576 context at most due to PC limitations, so please let me know how it fared for you. Hopefully it still works well with higher context!
Recommended settings are here: [**Settings**](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0#instruct)
## Sample Output

## Introduction
**WARNING: Ramblings incoming. Please continue scrolling down if you wish to skip the boring part ʱªʱªʱª(ᕑᗢूᓫ∗)**
Ohh boi, here we are! I'm very happy to share with you the result of countless hours bashing my head on the wall! *:・゚✧(=ఠ్ఠܫఠ్ఠ =)∫
To start up, I want to put a disclaimer. This is the first time I'm attempting to merge a model and I'm in no way an expert when it comes to coding. AT ALL. I believe I didn't understand what on earth I was looking at for like 70% of the time... Err, so there's that! I did test this model out rigorously after executing the merging codes, and so far I loved the results. I was honestly expecting the merge to absolutely fail and be totally incoherent, but thankfully not! The two days of not getting enough sleep is worth it ◝(˃̣̣̥▽˂̣̣̥)/
My goal was to hopefully create something that will get the best parts from each finetune/merge, where one model can cover for the other's weak points.
I am a VERY huge fan of [Starcannon v3](https://huggingface.co/nothingiisreal/MN-12B-Starcannon-v3) because of how in character its responses are. It just hits different. It's like the model is the character itself, not ACTING as the character. That's why it always feels sad whenever it starts deteriorating, like I'm observing my beloved character die. No matter what adjustment I did to the context, it won't stay coherent to reach 16K context. On the other hand, I love [NemoMix Unleashed](https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B) for its awesome stability at much longer contexts and its nature to progress the story forward even without prompting. It feels nice that it can stay coherent and stable even after reaching past the context size I set. I also find its ability to read between the lines great. So I figured, why not just marry the two to get the best of both worlds?
I would honestly love to do this again if I can because there's one too many times I found something I like in another model and then on another and wished so desperately they would just marry each other and have kids! XD
So please let me know how it fared for my first attempt!
I also want to learn how to finetune myself in addition to merging, but I don't think my PC is capable enough to endure it. I think it almost croaked on me when I did this merge, and my SDD cried, so maybe I'll just do it some other time when I have free time and more resources to spend.
And thus, I was finally able to merge my favorite models after hours of research, tutorials, asking annoying questions to the community (that no one replied to (´;︵;`)), and coding hell. Here we are!
**°˖✧It's all ABSOLUTELY worth it!✧˖°**
## Instruct
Both ChatML and Mistral should work fine. Personally, I tested this using ChatML. I found that I like the model's responses better when I use this format. Try to test it out and observe which one you like best. :D
## Settings
I recommend using these settings:
[Starcannon-Unleashed-12B-v1.0-ST-Formatting-2024-10-29.json](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0/blob/main/Starcannon-Unleashed-12B-v1.0-ST-Formatting-2024-10-29.json)
**IMPORTANT: Open Silly Tavern and use "Master Import", which can be found under "A" tab — Advanced Formatting. Replace the "INSERT WORLD HERE" placeholders with the world/universe your character belongs to. If not applicable, just remove that part.**

**Check your User Settings and set "Example Messages Behavior" to "Never include examples", in order to prevent the Examples of Dialogue from getting sent two times in the context. People reported that if not set, this results in <|im_end|> tokens being outputted. Refer to this [post](https://www.reddit.com/r/SillyTavernAI/comments/1gft8dy/comment/luoah8g/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) for more info.**

Temperature 1.15 - 1.25 is good, but lower should also work well, as long as you also tweak the Min P and XTC to ensure the model won't choke. Play around with it to see what suits your taste.
This is a modified version of MarinaraSpaghetti's Mistral-Small-Correct.json, transformed into ChatML.
You can find the original version here: [MarinaraSpaghetti/SillyTavern-Settings](https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main/Customized)
## Tips
- Examples of Dialogue and First Message are very important. The model will copy the style you wrote in these sections. So for example, if you want short outputs, make Examples of Dialogue and First Message short, and if you want longer outputs, make sure your examples have full paragraphs, composed of several sentences.
- If your Examples of Dialogue and First Message are short/concise but the model still rambles, lower Temperature in small increments, but keep Min P and XTC as is first. Test the result and adjust them to your liking. If it still rambles, raise the XTC Threshold.
- Utilize Author's Note In-chat @ Depth 2 as System if you want the instruction to have greater impact on the next response. If you want something exciting and spontaneous, you can try out this note I used when I tested out the model: "Scenario: Spontaneous. {{char}} has full autonomy to do anything they wish and progress the interaction in any way they like."
## Credits
A very huge thank you to [MarinaraSpaghetti](https://huggingface.co/MarinaraSpaghetti) and [Nothing is Real](https://huggingface.co/nothingiisreal)!! (灬^ω^灬)ノ~ ♡ (´。• ᵕ •。`) ♡
I really fell in love with your models and it inspired me to learn how to make this one, and boi was it worth it! °˖✧◝(TT▿TT)◜✧˖°
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the della_linear merge method using G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B as a base.
### Models Merged
The following models were included in the merge:
* G:\text-generation-webui\models\Nothingiisreal_MN-12B-Starcannon-v3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
dtype: bfloat16
merge_method: della_linear
parameters:
epsilon: 0.05
int8_mask: 1.0
lambda: 1.0
slices:
- sources:
- layer_range: [0, 40]
model: G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
parameters:
density: 0.65
weight: 0.4
- layer_range: [0, 40]
model: G:\text-generation-webui\models\Nothingiisreal_MN-12B-Starcannon-v3
parameters:
density: 0.55
weight: 0.6
``` |
netcat420/MFANN3bv0.23 | netcat420 | 2024-11-06T18:59:30Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T16:21:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fartboner/all-MiniLM-L6-v2-Q4_K_M-GGUF | fartboner | 2024-11-06T18:49:47Z | 12 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:quantized:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-11-06T18:49:45Z | ---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- llama-cpp
- gguf-my-repo
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
base_model: sentence-transformers/all-MiniLM-L6-v2
---
# fartboner/all-MiniLM-L6-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fartboner/all-MiniLM-L6-v2-Q4_K_M-GGUF --hf-file all-minilm-l6-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fartboner/all-MiniLM-L6-v2-Q4_K_M-GGUF --hf-file all-minilm-l6-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fartboner/all-MiniLM-L6-v2-Q4_K_M-GGUF --hf-file all-minilm-l6-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fartboner/all-MiniLM-L6-v2-Q4_K_M-GGUF --hf-file all-minilm-l6-v2-q4_k_m.gguf -c 2048
```
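Since the base model is a sentence-embedding model, the completion-style prompt in the commands above is mostly a placeholder; here is a minimal embedding sketch with the llama-cpp-python bindings (an assumption, as the card does not mention llama-cpp-python; the repo and filename are the ones used above):
```python
from llama_cpp import Llama

# Assumption: llama-cpp-python can pull the GGUF from this repo and run it in embedding mode.
llm = Llama.from_pretrained(
    repo_id="fartboner/all-MiniLM-L6-v2-Q4_K_M-GGUF",
    filename="all-minilm-l6-v2-q4_k_m.gguf",
    embedding=True,
)
vec = llm.create_embedding("This is a sentence to embed.")["data"][0]["embedding"]
print(len(vec))  # all-MiniLM-L6-v2 produces 384-dimensional embeddings
```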
|
bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF | bartowski | 2024-11-06T18:47:37Z | 10,512 | 75 | null | [
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-06T17:50:33Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
language:
- en
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
---
## Llamacpp imatrix Quantizations of Qwen2.5.1-Coder-7B-Instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4014">b4014</a> for quantization.
Original model: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## What's new:
New weights uploaded in place
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Qwen2.5.1-Coder-7B-Instruct-f16.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-f16.gguf) | f16 | 15.24GB | false | Full F16 weights. |
| [Qwen2.5.1-Coder-7B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q8_0.gguf) | Q8_0 | 8.10GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Qwen2.5.1-Coder-7B-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q6_K_L.gguf) | Q6_K_L | 6.52GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q6_K.gguf) | Q6_K | 6.25GB | false | Very high quality, near perfect, *recommended*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q5_K_L.gguf) | Q5_K_L | 5.78GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.44GB | false | High quality, *recommended*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5.32GB | false | High quality, *recommended*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q4_K_L.gguf) | Q4_K_L | 5.09GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.68GB | false | Good quality, default size for most use cases, *recommended*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q3_K_XL.gguf) | Q3_K_XL | 4.57GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Qwen2.5.1-Coder-7B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.46GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q4_0.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q4_0.gguf) | Q4_0 | 4.44GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Qwen2.5.1-Coder-7B-Instruct-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.43GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.43GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.43GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. |
| [Qwen2.5.1-Coder-7B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-IQ4_XS.gguf) | IQ4_XS | 4.22GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Qwen2.5.1-Coder-7B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q3_K_L.gguf) | Q3_K_L | 4.09GB | false | Lower quality but usable, good for low RAM availability. |
| [Qwen2.5.1-Coder-7B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. |
| [Qwen2.5.1-Coder-7B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-IQ3_M.gguf) | IQ3_M | 3.57GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Qwen2.5.1-Coder-7B-Instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q2_K_L.gguf) | Q2_K_L | 3.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Qwen2.5.1-Coder-7B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.49GB | false | Low quality, not recommended. |
| [Qwen2.5.1-Coder-7B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-IQ3_XS.gguf) | IQ3_XS | 3.35GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Qwen2.5.1-Coder-7B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-Q2_K.gguf) | Q2_K | 3.02GB | false | Very low quality but surprisingly usable. |
| [Qwen2.5.1-Coder-7B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5.1-Coder-7B-Instruct-IQ2_M.gguf) | IQ2_M | 2.78GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF --include "Qwen2.5.1-Coder-7B-Instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF --include "Qwen2.5.1-Coder-7B-Instruct-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Qwen2.5.1-Coder-7B-Instruct-Q8_0) or download them all in place (./)
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
viktoryes/bert-finetuned-ner | viktoryes | 2024-11-06T18:42:39Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-06T18:35:51Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
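Pending fuller documentation, a minimal inference sketch with the 🤗 Transformers `pipeline` (this assumes the checkpoint is used directly from this repo; the example sentence is arbitrary):

```python
from transformers import pipeline

# Token-classification (NER) sketch using this fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="viktoryes/bert-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)
print(ner("My name is Wolfgang and I live in Berlin."))
```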
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.2
|
MayBashendy/ASAP_FineTuningBERT_Aug_k25_task1_organization_fold2 | MayBashendy | 2024-11-06T18:39:30Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T17:34:01Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k25_task1_organization_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k25_task1_organization_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5838
- Qwk: 0.6106
- Mse: 0.5838
- Rmse: 0.7640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 0.0050 | 2 | 10.8733 | 0.0 | 10.8733 | 3.2975 |
| No log | 0.0100 | 4 | 10.1553 | 0.0 | 10.1553 | 3.1867 |
| No log | 0.0150 | 6 | 9.2079 | 0.0 | 9.2079 | 3.0345 |
| No log | 0.0201 | 8 | 7.6909 | 0.0039 | 7.6909 | 2.7733 |
| No log | 0.0251 | 10 | 6.1690 | 0.0 | 6.1690 | 2.4837 |
| No log | 0.0301 | 12 | 5.0600 | 0.0 | 5.0600 | 2.2495 |
| No log | 0.0351 | 14 | 3.8342 | 0.0638 | 3.8342 | 1.9581 |
| No log | 0.0401 | 16 | 3.0256 | 0.0137 | 3.0256 | 1.7394 |
| No log | 0.0451 | 18 | 2.0418 | 0.0039 | 2.0418 | 1.4289 |
| No log | 0.0501 | 20 | 1.4462 | 0.0238 | 1.4462 | 1.2026 |
| No log | 0.0551 | 22 | 1.0942 | 0.0703 | 1.0942 | 1.0460 |
| No log | 0.0602 | 24 | 0.9072 | 0.0345 | 0.9072 | 0.9525 |
| No log | 0.0652 | 26 | 0.7930 | 0.0107 | 0.7930 | 0.8905 |
| No log | 0.0702 | 28 | 0.7871 | 0.0107 | 0.7871 | 0.8872 |
| No log | 0.0752 | 30 | 0.7991 | 0.0 | 0.7991 | 0.8939 |
| No log | 0.0802 | 32 | 1.2109 | 0.0 | 1.2109 | 1.1004 |
| No log | 0.0852 | 34 | 1.1672 | 0.0 | 1.1672 | 1.0803 |
| No log | 0.0902 | 36 | 0.7922 | 0.0 | 0.7922 | 0.8900 |
| No log | 0.0952 | 38 | 0.7939 | 0.0174 | 0.7939 | 0.8910 |
| No log | 0.1003 | 40 | 0.7654 | 0.0107 | 0.7654 | 0.8749 |
| No log | 0.1053 | 42 | 0.7652 | 0.0 | 0.7652 | 0.8747 |
| No log | 0.1103 | 44 | 0.7911 | 0.0174 | 0.7911 | 0.8894 |
| No log | 0.1153 | 46 | 0.8239 | 0.0345 | 0.8239 | 0.9077 |
| No log | 0.1203 | 48 | 0.7759 | 0.0241 | 0.7759 | 0.8809 |
| No log | 0.1253 | 50 | 0.7405 | 0.0241 | 0.7405 | 0.8605 |
| No log | 0.1303 | 52 | 0.7589 | 0.0372 | 0.7589 | 0.8712 |
| No log | 0.1353 | 54 | 0.7325 | 0.0372 | 0.7325 | 0.8559 |
| No log | 0.1404 | 56 | 0.7133 | 0.0345 | 0.7133 | 0.8445 |
| No log | 0.1454 | 58 | 0.7419 | 0.2617 | 0.7419 | 0.8614 |
| No log | 0.1504 | 60 | 0.7218 | 0.1997 | 0.7218 | 0.8496 |
| No log | 0.1554 | 62 | 0.6995 | 0.0345 | 0.6995 | 0.8364 |
| No log | 0.1604 | 64 | 0.7506 | 0.0539 | 0.7506 | 0.8664 |
| No log | 0.1654 | 66 | 0.7464 | 0.0475 | 0.7464 | 0.8639 |
| No log | 0.1704 | 68 | 0.7236 | 0.0449 | 0.7236 | 0.8506 |
| No log | 0.1754 | 70 | 0.7181 | 0.0443 | 0.7181 | 0.8474 |
| No log | 0.1805 | 72 | 0.7335 | 0.0356 | 0.7335 | 0.8564 |
| No log | 0.1855 | 74 | 0.7263 | 0.0443 | 0.7263 | 0.8522 |
| No log | 0.1905 | 76 | 0.7263 | 0.0475 | 0.7263 | 0.8523 |
| No log | 0.1955 | 78 | 0.8467 | 0.1193 | 0.8467 | 0.9202 |
| No log | 0.2005 | 80 | 0.7613 | 0.1193 | 0.7613 | 0.8725 |
| No log | 0.2055 | 82 | 0.6640 | 0.1048 | 0.6640 | 0.8149 |
| No log | 0.2105 | 84 | 0.6364 | 0.1265 | 0.6364 | 0.7977 |
| No log | 0.2155 | 86 | 0.7049 | 0.1981 | 0.7049 | 0.8396 |
| No log | 0.2206 | 88 | 0.6111 | 0.1460 | 0.6111 | 0.7817 |
| No log | 0.2256 | 90 | 0.6047 | 0.2709 | 0.6047 | 0.7776 |
| No log | 0.2306 | 92 | 0.6234 | 0.0867 | 0.6234 | 0.7896 |
| No log | 0.2356 | 94 | 0.6245 | 0.1405 | 0.6245 | 0.7902 |
| No log | 0.2406 | 96 | 0.7139 | 0.2132 | 0.7139 | 0.8449 |
| No log | 0.2456 | 98 | 0.7129 | 0.2083 | 0.7129 | 0.8444 |
| No log | 0.2506 | 100 | 0.6223 | 0.1491 | 0.6223 | 0.7889 |
| No log | 0.2556 | 102 | 0.6026 | 0.1767 | 0.6026 | 0.7763 |
| No log | 0.2607 | 104 | 0.6241 | 0.2073 | 0.6241 | 0.7900 |
| No log | 0.2657 | 106 | 0.5722 | 0.2115 | 0.5722 | 0.7564 |
| No log | 0.2707 | 108 | 0.6125 | 0.3022 | 0.6125 | 0.7826 |
| No log | 0.2757 | 110 | 0.6971 | 0.0575 | 0.6971 | 0.8349 |
| No log | 0.2807 | 112 | 0.8042 | 0.0575 | 0.8042 | 0.8968 |
| No log | 0.2857 | 114 | 0.7376 | 0.0575 | 0.7376 | 0.8588 |
| No log | 0.2907 | 116 | 0.6602 | 0.1272 | 0.6602 | 0.8125 |
| No log | 0.2957 | 118 | 0.6541 | 0.2759 | 0.6541 | 0.8088 |
| No log | 0.3008 | 120 | 0.6764 | 0.0823 | 0.6764 | 0.8224 |
| No log | 0.3058 | 122 | 0.6903 | 0.1267 | 0.6903 | 0.8308 |
| No log | 0.3108 | 124 | 0.6391 | 0.1225 | 0.6391 | 0.7994 |
| No log | 0.3158 | 126 | 0.6187 | 0.1375 | 0.6187 | 0.7866 |
| No log | 0.3208 | 128 | 0.5873 | 0.3277 | 0.5873 | 0.7664 |
| No log | 0.3258 | 130 | 0.5633 | 0.3757 | 0.5633 | 0.7505 |
| No log | 0.3308 | 132 | 0.5560 | 0.3216 | 0.5560 | 0.7456 |
| No log | 0.3358 | 134 | 0.5551 | 0.4515 | 0.5551 | 0.7451 |
| No log | 0.3409 | 136 | 0.6150 | 0.4712 | 0.6150 | 0.7842 |
| No log | 0.3459 | 138 | 0.5958 | 0.4173 | 0.5958 | 0.7719 |
| No log | 0.3509 | 140 | 0.6142 | 0.3484 | 0.6142 | 0.7837 |
| No log | 0.3559 | 142 | 0.6605 | 0.4340 | 0.6605 | 0.8127 |
| No log | 0.3609 | 144 | 0.7271 | 0.4472 | 0.7271 | 0.8527 |
| No log | 0.3659 | 146 | 0.7140 | 0.4313 | 0.7140 | 0.8450 |
| No log | 0.3709 | 148 | 0.6328 | 0.3401 | 0.6328 | 0.7955 |
| No log | 0.3759 | 150 | 0.5699 | 0.2874 | 0.5699 | 0.7549 |
| No log | 0.3810 | 152 | 0.5638 | 0.3494 | 0.5638 | 0.7509 |
| No log | 0.3860 | 154 | 0.6352 | 0.4403 | 0.6352 | 0.7970 |
| No log | 0.3910 | 156 | 0.6795 | 0.4163 | 0.6795 | 0.8243 |
| No log | 0.3960 | 158 | 0.6123 | 0.4561 | 0.6123 | 0.7825 |
| No log | 0.4010 | 160 | 0.5606 | 0.3538 | 0.5606 | 0.7487 |
| No log | 0.4060 | 162 | 0.5583 | 0.3839 | 0.5583 | 0.7472 |
| No log | 0.4110 | 164 | 0.6124 | 0.4583 | 0.6124 | 0.7826 |
| No log | 0.4160 | 166 | 0.6710 | 0.4278 | 0.6710 | 0.8192 |
| No log | 0.4211 | 168 | 0.6012 | 0.4891 | 0.6012 | 0.7753 |
| No log | 0.4261 | 170 | 0.5562 | 0.3393 | 0.5562 | 0.7458 |
| No log | 0.4311 | 172 | 0.5601 | 0.2241 | 0.5601 | 0.7484 |
| No log | 0.4361 | 174 | 0.5467 | 0.3685 | 0.5467 | 0.7394 |
| No log | 0.4411 | 176 | 0.5761 | 0.4687 | 0.5761 | 0.7590 |
| No log | 0.4461 | 178 | 0.5629 | 0.4621 | 0.5629 | 0.7503 |
| No log | 0.4511 | 180 | 0.5299 | 0.3916 | 0.5299 | 0.7279 |
| No log | 0.4561 | 182 | 0.5921 | 0.2381 | 0.5921 | 0.7695 |
| No log | 0.4612 | 184 | 0.5615 | 0.2700 | 0.5615 | 0.7493 |
| No log | 0.4662 | 186 | 0.5452 | 0.4371 | 0.5452 | 0.7384 |
| No log | 0.4712 | 188 | 0.6596 | 0.4490 | 0.6596 | 0.8122 |
| No log | 0.4762 | 190 | 0.6738 | 0.4464 | 0.6738 | 0.8208 |
| No log | 0.4812 | 192 | 0.6228 | 0.4459 | 0.6228 | 0.7892 |
| No log | 0.4862 | 194 | 0.5572 | 0.4402 | 0.5572 | 0.7465 |
| No log | 0.4912 | 196 | 0.5356 | 0.4023 | 0.5356 | 0.7318 |
| No log | 0.4962 | 198 | 0.5261 | 0.4686 | 0.5261 | 0.7254 |
| No log | 0.5013 | 200 | 0.5300 | 0.4931 | 0.5300 | 0.7280 |
| No log | 0.5063 | 202 | 0.6108 | 0.5407 | 0.6108 | 0.7815 |
| No log | 0.5113 | 204 | 0.5554 | 0.5432 | 0.5554 | 0.7453 |
| No log | 0.5163 | 206 | 0.4690 | 0.5030 | 0.4690 | 0.6848 |
| No log | 0.5213 | 208 | 0.4794 | 0.4872 | 0.4794 | 0.6924 |
| No log | 0.5263 | 210 | 0.5447 | 0.4400 | 0.5447 | 0.7380 |
| No log | 0.5313 | 212 | 0.5817 | 0.4360 | 0.5817 | 0.7627 |
| No log | 0.5363 | 214 | 0.4918 | 0.4964 | 0.4918 | 0.7012 |
| No log | 0.5414 | 216 | 0.5011 | 0.4730 | 0.5011 | 0.7079 |
| No log | 0.5464 | 218 | 0.4949 | 0.4773 | 0.4949 | 0.7035 |
| No log | 0.5514 | 220 | 0.4679 | 0.5461 | 0.4679 | 0.6840 |
| No log | 0.5564 | 222 | 0.5397 | 0.5587 | 0.5397 | 0.7346 |
| No log | 0.5614 | 224 | 0.6017 | 0.4901 | 0.6017 | 0.7757 |
| No log | 0.5664 | 226 | 0.6441 | 0.2364 | 0.6441 | 0.8026 |
| No log | 0.5714 | 228 | 0.6377 | 0.1571 | 0.6377 | 0.7986 |
| No log | 0.5764 | 230 | 0.6369 | 0.1508 | 0.6369 | 0.7980 |
| No log | 0.5815 | 232 | 0.6548 | 0.2072 | 0.6548 | 0.8092 |
| No log | 0.5865 | 234 | 0.5604 | 0.4995 | 0.5604 | 0.7486 |
| No log | 0.5915 | 236 | 0.4619 | 0.4923 | 0.4619 | 0.6796 |
| No log | 0.5965 | 238 | 0.4412 | 0.5588 | 0.4412 | 0.6642 |
| No log | 0.6015 | 240 | 0.5240 | 0.5413 | 0.5240 | 0.7239 |
| No log | 0.6065 | 242 | 0.5629 | 0.5443 | 0.5629 | 0.7503 |
| No log | 0.6115 | 244 | 0.4687 | 0.5263 | 0.4687 | 0.6846 |
| No log | 0.6165 | 246 | 0.4727 | 0.4791 | 0.4727 | 0.6876 |
| No log | 0.6216 | 248 | 0.5476 | 0.5130 | 0.5476 | 0.7400 |
| No log | 0.6266 | 250 | 0.7945 | 0.4080 | 0.7945 | 0.8913 |
| No log | 0.6316 | 252 | 0.9281 | 0.3613 | 0.9281 | 0.9634 |
| No log | 0.6366 | 254 | 0.9152 | 0.4198 | 0.9152 | 0.9566 |
| No log | 0.6416 | 256 | 0.7389 | 0.4918 | 0.7389 | 0.8596 |
| No log | 0.6466 | 258 | 0.5585 | 0.5521 | 0.5585 | 0.7473 |
| No log | 0.6516 | 260 | 0.5382 | 0.5650 | 0.5382 | 0.7336 |
| No log | 0.6566 | 262 | 0.6351 | 0.5272 | 0.6351 | 0.7969 |
| No log | 0.6617 | 264 | 0.7908 | 0.4996 | 0.7908 | 0.8892 |
| No log | 0.6667 | 266 | 0.7008 | 0.4958 | 0.7008 | 0.8371 |
| No log | 0.6717 | 268 | 0.5496 | 0.4947 | 0.5496 | 0.7414 |
| No log | 0.6767 | 270 | 0.5346 | 0.4236 | 0.5346 | 0.7311 |
| No log | 0.6817 | 272 | 0.5382 | 0.4067 | 0.5382 | 0.7336 |
| No log | 0.6867 | 274 | 0.5214 | 0.4680 | 0.5214 | 0.7221 |
| No log | 0.6917 | 276 | 0.5135 | 0.5062 | 0.5135 | 0.7166 |
| No log | 0.6967 | 278 | 0.5106 | 0.5250 | 0.5106 | 0.7145 |
| No log | 0.7018 | 280 | 0.4806 | 0.4816 | 0.4806 | 0.6932 |
| No log | 0.7068 | 282 | 0.4702 | 0.4438 | 0.4702 | 0.6857 |
| No log | 0.7118 | 284 | 0.4708 | 0.4327 | 0.4708 | 0.6862 |
| No log | 0.7168 | 286 | 0.4623 | 0.4583 | 0.4623 | 0.6799 |
| No log | 0.7218 | 288 | 0.4645 | 0.5214 | 0.4645 | 0.6815 |
| No log | 0.7268 | 290 | 0.5278 | 0.5662 | 0.5278 | 0.7265 |
| No log | 0.7318 | 292 | 0.5359 | 0.5643 | 0.5359 | 0.7321 |
| No log | 0.7368 | 294 | 0.5511 | 0.5613 | 0.5511 | 0.7424 |
| No log | 0.7419 | 296 | 0.5864 | 0.5650 | 0.5864 | 0.7658 |
| No log | 0.7469 | 298 | 0.5172 | 0.5814 | 0.5172 | 0.7192 |
| No log | 0.7519 | 300 | 0.4118 | 0.5532 | 0.4118 | 0.6417 |
| No log | 0.7569 | 302 | 0.4289 | 0.5068 | 0.4289 | 0.6549 |
| No log | 0.7619 | 304 | 0.4135 | 0.5424 | 0.4135 | 0.6431 |
| No log | 0.7669 | 306 | 0.5126 | 0.5652 | 0.5126 | 0.7160 |
| No log | 0.7719 | 308 | 0.6338 | 0.5421 | 0.6338 | 0.7961 |
| No log | 0.7769 | 310 | 0.5446 | 0.5504 | 0.5446 | 0.7380 |
| No log | 0.7820 | 312 | 0.4251 | 0.5462 | 0.4251 | 0.6520 |
| No log | 0.7870 | 314 | 0.4381 | 0.4806 | 0.4381 | 0.6619 |
| No log | 0.7920 | 316 | 0.4345 | 0.4995 | 0.4345 | 0.6591 |
| No log | 0.7970 | 318 | 0.4291 | 0.5660 | 0.4291 | 0.6550 |
| No log | 0.8020 | 320 | 0.5193 | 0.5754 | 0.5193 | 0.7207 |
| No log | 0.8070 | 322 | 0.5049 | 0.5769 | 0.5049 | 0.7106 |
| No log | 0.8120 | 324 | 0.4388 | 0.5743 | 0.4388 | 0.6624 |
| No log | 0.8170 | 326 | 0.4333 | 0.5723 | 0.4333 | 0.6583 |
| No log | 0.8221 | 328 | 0.4290 | 0.5620 | 0.4290 | 0.6550 |
| No log | 0.8271 | 330 | 0.4357 | 0.5675 | 0.4357 | 0.6600 |
| No log | 0.8321 | 332 | 0.4959 | 0.5756 | 0.4959 | 0.7042 |
| No log | 0.8371 | 334 | 0.5154 | 0.5544 | 0.5154 | 0.7179 |
| No log | 0.8421 | 336 | 0.4459 | 0.5607 | 0.4459 | 0.6677 |
| No log | 0.8471 | 338 | 0.4278 | 0.5778 | 0.4278 | 0.6541 |
| No log | 0.8521 | 340 | 0.4239 | 0.5474 | 0.4239 | 0.6511 |
| No log | 0.8571 | 342 | 0.4185 | 0.5436 | 0.4185 | 0.6469 |
| No log | 0.8622 | 344 | 0.4301 | 0.5791 | 0.4301 | 0.6558 |
| No log | 0.8672 | 346 | 0.4662 | 0.5736 | 0.4662 | 0.6828 |
| No log | 0.8722 | 348 | 0.5727 | 0.5639 | 0.5727 | 0.7567 |
| No log | 0.8772 | 350 | 0.5116 | 0.5576 | 0.5116 | 0.7152 |
| No log | 0.8822 | 352 | 0.4919 | 0.5232 | 0.4919 | 0.7014 |
| No log | 0.8872 | 354 | 0.5162 | 0.5348 | 0.5162 | 0.7185 |
| No log | 0.8922 | 356 | 0.4872 | 0.5275 | 0.4872 | 0.6980 |
| No log | 0.8972 | 358 | 0.4745 | 0.5229 | 0.4745 | 0.6888 |
| No log | 0.9023 | 360 | 0.4812 | 0.5090 | 0.4812 | 0.6937 |
| No log | 0.9073 | 362 | 0.4683 | 0.4678 | 0.4683 | 0.6843 |
| No log | 0.9123 | 364 | 0.4641 | 0.4018 | 0.4641 | 0.6813 |
| No log | 0.9173 | 366 | 0.5020 | 0.3674 | 0.5020 | 0.7085 |
| No log | 0.9223 | 368 | 0.5030 | 0.3811 | 0.5030 | 0.7092 |
| No log | 0.9273 | 370 | 0.4522 | 0.4696 | 0.4522 | 0.6724 |
| No log | 0.9323 | 372 | 0.4859 | 0.5393 | 0.4859 | 0.6970 |
| No log | 0.9373 | 374 | 0.4815 | 0.5164 | 0.4815 | 0.6939 |
| No log | 0.9424 | 376 | 0.4638 | 0.4297 | 0.4638 | 0.6810 |
| No log | 0.9474 | 378 | 0.4803 | 0.4401 | 0.4803 | 0.6930 |
| No log | 0.9524 | 380 | 0.5758 | 0.4879 | 0.5758 | 0.7588 |
| No log | 0.9574 | 382 | 0.8233 | 0.4769 | 0.8233 | 0.9073 |
| No log | 0.9624 | 384 | 0.7776 | 0.4852 | 0.7776 | 0.8818 |
| No log | 0.9674 | 386 | 0.5953 | 0.4637 | 0.5953 | 0.7716 |
| No log | 0.9724 | 388 | 0.5898 | 0.4701 | 0.5898 | 0.7680 |
| No log | 0.9774 | 390 | 0.6605 | 0.4632 | 0.6605 | 0.8127 |
| No log | 0.9825 | 392 | 0.6187 | 0.4816 | 0.6187 | 0.7866 |
| No log | 0.9875 | 394 | 0.5069 | 0.4067 | 0.5069 | 0.7120 |
| No log | 0.9925 | 396 | 0.4954 | 0.4028 | 0.4954 | 0.7038 |
| No log | 0.9975 | 398 | 0.4975 | 0.3837 | 0.4975 | 0.7053 |
| No log | 1.0025 | 400 | 0.4821 | 0.4292 | 0.4821 | 0.6944 |
| No log | 1.0075 | 402 | 0.5886 | 0.5332 | 0.5886 | 0.7672 |
| No log | 1.0125 | 404 | 0.5745 | 0.5157 | 0.5745 | 0.7580 |
| No log | 1.0175 | 406 | 0.4698 | 0.4666 | 0.4698 | 0.6854 |
| No log | 1.0226 | 408 | 0.5246 | 0.3662 | 0.5246 | 0.7243 |
| No log | 1.0276 | 410 | 0.5383 | 0.3574 | 0.5383 | 0.7337 |
| No log | 1.0326 | 412 | 0.4645 | 0.4372 | 0.4645 | 0.6815 |
| No log | 1.0376 | 414 | 0.4988 | 0.5624 | 0.4988 | 0.7063 |
| No log | 1.0426 | 416 | 0.6110 | 0.5717 | 0.6110 | 0.7817 |
| No log | 1.0476 | 418 | 0.5429 | 0.5949 | 0.5429 | 0.7368 |
| No log | 1.0526 | 420 | 0.4471 | 0.4992 | 0.4471 | 0.6686 |
| No log | 1.0576 | 422 | 0.4548 | 0.5074 | 0.4548 | 0.6744 |
| No log | 1.0627 | 424 | 0.4772 | 0.5309 | 0.4772 | 0.6908 |
| No log | 1.0677 | 426 | 0.6271 | 0.5488 | 0.6271 | 0.7919 |
| No log | 1.0727 | 428 | 0.7450 | 0.5354 | 0.7450 | 0.8631 |
| No log | 1.0777 | 430 | 0.7295 | 0.5143 | 0.7295 | 0.8541 |
| No log | 1.0827 | 432 | 0.5681 | 0.5364 | 0.5681 | 0.7537 |
| No log | 1.0877 | 434 | 0.4187 | 0.5224 | 0.4187 | 0.6471 |
| No log | 1.0927 | 436 | 0.4103 | 0.5162 | 0.4103 | 0.6405 |
| No log | 1.0977 | 438 | 0.4288 | 0.5692 | 0.4288 | 0.6549 |
| No log | 1.1028 | 440 | 0.5248 | 0.6107 | 0.5248 | 0.7244 |
| No log | 1.1078 | 442 | 0.5222 | 0.6327 | 0.5222 | 0.7226 |
| No log | 1.1128 | 444 | 0.4314 | 0.5593 | 0.4314 | 0.6568 |
| No log | 1.1178 | 446 | 0.4246 | 0.4988 | 0.4246 | 0.6516 |
| No log | 1.1228 | 448 | 0.4229 | 0.5010 | 0.4229 | 0.6503 |
| No log | 1.1278 | 450 | 0.4505 | 0.5761 | 0.4505 | 0.6712 |
| No log | 1.1328 | 452 | 0.5725 | 0.5673 | 0.5725 | 0.7566 |
| No log | 1.1378 | 454 | 0.5486 | 0.5641 | 0.5486 | 0.7406 |
| No log | 1.1429 | 456 | 0.4562 | 0.5517 | 0.4562 | 0.6754 |
| No log | 1.1479 | 458 | 0.4540 | 0.5081 | 0.4540 | 0.6738 |
| No log | 1.1529 | 460 | 0.4476 | 0.5217 | 0.4476 | 0.6690 |
| No log | 1.1579 | 462 | 0.4523 | 0.5598 | 0.4523 | 0.6726 |
| No log | 1.1629 | 464 | 0.4848 | 0.5703 | 0.4848 | 0.6963 |
| No log | 1.1679 | 466 | 0.4640 | 0.5829 | 0.4640 | 0.6812 |
| No log | 1.1729 | 468 | 0.4315 | 0.5608 | 0.4315 | 0.6569 |
| No log | 1.1779 | 470 | 0.4715 | 0.5847 | 0.4715 | 0.6867 |
| No log | 1.1830 | 472 | 0.4666 | 0.6121 | 0.4666 | 0.6831 |
| No log | 1.1880 | 474 | 0.5071 | 0.6437 | 0.5071 | 0.7121 |
| No log | 1.1930 | 476 | 0.5649 | 0.6530 | 0.5649 | 0.7516 |
| No log | 1.1980 | 478 | 0.4663 | 0.6425 | 0.4663 | 0.6828 |
| No log | 1.2030 | 480 | 0.4229 | 0.6013 | 0.4229 | 0.6503 |
| No log | 1.2080 | 482 | 0.4819 | 0.6334 | 0.4819 | 0.6942 |
| No log | 1.2130 | 484 | 0.6275 | 0.6499 | 0.6275 | 0.7922 |
| No log | 1.2180 | 486 | 0.8328 | 0.5850 | 0.8328 | 0.9126 |
| No log | 1.2231 | 488 | 1.1126 | 0.5378 | 1.1126 | 1.0548 |
| No log | 1.2281 | 490 | 1.0108 | 0.4583 | 1.0108 | 1.0054 |
| No log | 1.2331 | 492 | 0.8469 | 0.4070 | 0.8469 | 0.9203 |
| No log | 1.2381 | 494 | 0.8322 | 0.3959 | 0.8322 | 0.9123 |
| No log | 1.2431 | 496 | 0.7351 | 0.4257 | 0.7351 | 0.8574 |
| No log | 1.2481 | 498 | 0.6612 | 0.4266 | 0.6612 | 0.8131 |
| 0.5571 | 1.2531 | 500 | 0.7408 | 0.4322 | 0.7408 | 0.8607 |
| 0.5571 | 1.2581 | 502 | 0.9653 | 0.4607 | 0.9653 | 0.9825 |
| 0.5571 | 1.2632 | 504 | 0.9859 | 0.4712 | 0.9859 | 0.9929 |
| 0.5571 | 1.2682 | 506 | 0.7558 | 0.5261 | 0.7558 | 0.8694 |
| 0.5571 | 1.2732 | 508 | 0.6530 | 0.5094 | 0.6530 | 0.8081 |
| 0.5571 | 1.2782 | 510 | 0.5411 | 0.4601 | 0.5411 | 0.7356 |
| 0.5571 | 1.2832 | 512 | 0.5155 | 0.4838 | 0.5155 | 0.7180 |
| 0.5571 | 1.2882 | 514 | 0.5624 | 0.5800 | 0.5624 | 0.7499 |
| 0.5571 | 1.2932 | 516 | 0.5132 | 0.5860 | 0.5132 | 0.7164 |
| 0.5571 | 1.2982 | 518 | 0.4442 | 0.5214 | 0.4442 | 0.6665 |
| 0.5571 | 1.3033 | 520 | 0.4533 | 0.5778 | 0.4533 | 0.6733 |
| 0.5571 | 1.3083 | 522 | 0.4693 | 0.6182 | 0.4693 | 0.6851 |
| 0.5571 | 1.3133 | 524 | 0.4479 | 0.6018 | 0.4479 | 0.6693 |
| 0.5571 | 1.3183 | 526 | 0.4317 | 0.5600 | 0.4317 | 0.6571 |
| 0.5571 | 1.3233 | 528 | 0.4464 | 0.5981 | 0.4464 | 0.6681 |
| 0.5571 | 1.3283 | 530 | 0.4336 | 0.5530 | 0.4336 | 0.6585 |
| 0.5571 | 1.3333 | 532 | 0.4345 | 0.4779 | 0.4345 | 0.6592 |
| 0.5571 | 1.3383 | 534 | 0.4366 | 0.5190 | 0.4366 | 0.6607 |
| 0.5571 | 1.3434 | 536 | 0.4557 | 0.5411 | 0.4557 | 0.6751 |
| 0.5571 | 1.3484 | 538 | 0.4994 | 0.5941 | 0.4994 | 0.7067 |
| 0.5571 | 1.3534 | 540 | 0.4581 | 0.5362 | 0.4581 | 0.6768 |
| 0.5571 | 1.3584 | 542 | 0.4510 | 0.4483 | 0.4510 | 0.6716 |
| 0.5571 | 1.3634 | 544 | 0.4550 | 0.4952 | 0.4550 | 0.6745 |
| 0.5571 | 1.3684 | 546 | 0.5593 | 0.5958 | 0.5593 | 0.7479 |
| 0.5571 | 1.3734 | 548 | 0.6351 | 0.5932 | 0.6351 | 0.7969 |
| 0.5571 | 1.3784 | 550 | 0.5340 | 0.5502 | 0.5340 | 0.7308 |
| 0.5571 | 1.3835 | 552 | 0.4765 | 0.4720 | 0.4765 | 0.6903 |
| 0.5571 | 1.3885 | 554 | 0.4833 | 0.4739 | 0.4833 | 0.6952 |
| 0.5571 | 1.3935 | 556 | 0.5641 | 0.5317 | 0.5641 | 0.7511 |
| 0.5571 | 1.3985 | 558 | 0.6123 | 0.5462 | 0.6123 | 0.7825 |
| 0.5571 | 1.4035 | 560 | 0.6073 | 0.5520 | 0.6073 | 0.7793 |
| 0.5571 | 1.4085 | 562 | 0.5448 | 0.5377 | 0.5448 | 0.7381 |
| 0.5571 | 1.4135 | 564 | 0.5548 | 0.5812 | 0.5548 | 0.7449 |
| 0.5571 | 1.4185 | 566 | 0.5482 | 0.5941 | 0.5482 | 0.7404 |
| 0.5571 | 1.4236 | 568 | 0.4663 | 0.5756 | 0.4663 | 0.6829 |
| 0.5571 | 1.4286 | 570 | 0.4658 | 0.5766 | 0.4658 | 0.6825 |
| 0.5571 | 1.4336 | 572 | 0.5565 | 0.6095 | 0.5565 | 0.7460 |
| 0.5571 | 1.4386 | 574 | 0.5923 | 0.6191 | 0.5923 | 0.7696 |
| 0.5571 | 1.4436 | 576 | 0.5375 | 0.6046 | 0.5375 | 0.7332 |
| 0.5571 | 1.4486 | 578 | 0.5426 | 0.6063 | 0.5426 | 0.7366 |
| 0.5571 | 1.4536 | 580 | 0.6643 | 0.6052 | 0.6643 | 0.8150 |
| 0.5571 | 1.4586 | 582 | 0.7432 | 0.6152 | 0.7432 | 0.8621 |
| 0.5571 | 1.4637 | 584 | 0.6486 | 0.6084 | 0.6486 | 0.8053 |
| 0.5571 | 1.4687 | 586 | 0.5750 | 0.5936 | 0.5750 | 0.7583 |
| 0.5571 | 1.4737 | 588 | 0.6248 | 0.6225 | 0.6248 | 0.7904 |
| 0.5571 | 1.4787 | 590 | 0.7837 | 0.6194 | 0.7837 | 0.8853 |
| 0.5571 | 1.4837 | 592 | 0.6825 | 0.6183 | 0.6825 | 0.8261 |
| 0.5571 | 1.4887 | 594 | 0.5697 | 0.5912 | 0.5697 | 0.7548 |
| 0.5571 | 1.4937 | 596 | 0.4908 | 0.5764 | 0.4908 | 0.7005 |
| 0.5571 | 1.4987 | 598 | 0.4400 | 0.5336 | 0.4400 | 0.6633 |
| 0.5571 | 1.5038 | 600 | 0.4405 | 0.5190 | 0.4405 | 0.6637 |
| 0.5571 | 1.5088 | 602 | 0.4546 | 0.5776 | 0.4546 | 0.6742 |
| 0.5571 | 1.5138 | 604 | 0.4669 | 0.5846 | 0.4669 | 0.6833 |
| 0.5571 | 1.5188 | 606 | 0.4466 | 0.5140 | 0.4466 | 0.6683 |
| 0.5571 | 1.5238 | 608 | 0.5130 | 0.4114 | 0.5130 | 0.7162 |
| 0.5571 | 1.5288 | 610 | 0.4869 | 0.4363 | 0.4869 | 0.6978 |
| 0.5571 | 1.5338 | 612 | 0.4595 | 0.5277 | 0.4595 | 0.6778 |
| 0.5571 | 1.5388 | 614 | 0.6341 | 0.5978 | 0.6341 | 0.7963 |
| 0.5571 | 1.5439 | 616 | 0.6829 | 0.6088 | 0.6829 | 0.8264 |
| 0.5571 | 1.5489 | 618 | 0.5427 | 0.5811 | 0.5427 | 0.7367 |
| 0.5571 | 1.5539 | 620 | 0.4607 | 0.5182 | 0.4607 | 0.6787 |
| 0.5571 | 1.5589 | 622 | 0.4484 | 0.5262 | 0.4484 | 0.6696 |
| 0.5571 | 1.5639 | 624 | 0.4379 | 0.5342 | 0.4379 | 0.6617 |
| 0.5571 | 1.5689 | 626 | 0.4323 | 0.5543 | 0.4323 | 0.6575 |
| 0.5571 | 1.5739 | 628 | 0.4253 | 0.5395 | 0.4253 | 0.6522 |
| 0.5571 | 1.5789 | 630 | 0.4382 | 0.5974 | 0.4382 | 0.6619 |
| 0.5571 | 1.5840 | 632 | 0.4724 | 0.6324 | 0.4724 | 0.6873 |
| 0.5571 | 1.5890 | 634 | 0.4826 | 0.6449 | 0.4826 | 0.6947 |
| 0.5571 | 1.5940 | 636 | 0.4444 | 0.6053 | 0.4444 | 0.6666 |
| 0.5571 | 1.5990 | 638 | 0.4351 | 0.6024 | 0.4351 | 0.6596 |
| 0.5571 | 1.6040 | 640 | 0.4372 | 0.6130 | 0.4372 | 0.6612 |
| 0.5571 | 1.6090 | 642 | 0.4975 | 0.6316 | 0.4975 | 0.7054 |
| 0.5571 | 1.6140 | 644 | 0.5078 | 0.6302 | 0.5078 | 0.7126 |
| 0.5571 | 1.6190 | 646 | 0.4606 | 0.6164 | 0.4606 | 0.6787 |
| 0.5571 | 1.6241 | 648 | 0.5190 | 0.6200 | 0.5190 | 0.7204 |
| 0.5571 | 1.6291 | 650 | 0.6213 | 0.6241 | 0.6213 | 0.7882 |
| 0.5571 | 1.6341 | 652 | 0.6215 | 0.6501 | 0.6215 | 0.7884 |
| 0.5571 | 1.6391 | 654 | 0.5520 | 0.6384 | 0.5520 | 0.7430 |
| 0.5571 | 1.6441 | 656 | 0.5224 | 0.6364 | 0.5224 | 0.7228 |
| 0.5571 | 1.6491 | 658 | 0.5731 | 0.6744 | 0.5731 | 0.7570 |
| 0.5571 | 1.6541 | 660 | 0.6801 | 0.6964 | 0.6801 | 0.8247 |
| 0.5571 | 1.6591 | 662 | 0.6533 | 0.7074 | 0.6533 | 0.8083 |
| 0.5571 | 1.6642 | 664 | 0.5543 | 0.6756 | 0.5543 | 0.7445 |
| 0.5571 | 1.6692 | 666 | 0.4179 | 0.6019 | 0.4179 | 0.6465 |
| 0.5571 | 1.6742 | 668 | 0.4017 | 0.5702 | 0.4017 | 0.6338 |
| 0.5571 | 1.6792 | 670 | 0.4231 | 0.6162 | 0.4231 | 0.6505 |
| 0.5571 | 1.6842 | 672 | 0.5368 | 0.6568 | 0.5368 | 0.7327 |
| 0.5571 | 1.6892 | 674 | 0.5521 | 0.6643 | 0.5521 | 0.7430 |
| 0.5571 | 1.6942 | 676 | 0.4464 | 0.6122 | 0.4464 | 0.6681 |
| 0.5571 | 1.6992 | 678 | 0.4184 | 0.5952 | 0.4184 | 0.6468 |
| 0.5571 | 1.7043 | 680 | 0.4864 | 0.6324 | 0.4864 | 0.6974 |
| 0.5571 | 1.7093 | 682 | 0.6196 | 0.6727 | 0.6196 | 0.7872 |
| 0.5571 | 1.7143 | 684 | 0.5929 | 0.6712 | 0.5929 | 0.7700 |
| 0.5571 | 1.7193 | 686 | 0.5315 | 0.6435 | 0.5315 | 0.7291 |
| 0.5571 | 1.7243 | 688 | 0.4502 | 0.5862 | 0.4502 | 0.6710 |
| 0.5571 | 1.7293 | 690 | 0.4466 | 0.5904 | 0.4466 | 0.6683 |
| 0.5571 | 1.7343 | 692 | 0.4680 | 0.6004 | 0.4680 | 0.6841 |
| 0.5571 | 1.7393 | 694 | 0.4699 | 0.5864 | 0.4699 | 0.6855 |
| 0.5571 | 1.7444 | 696 | 0.4380 | 0.5804 | 0.4380 | 0.6618 |
| 0.5571 | 1.7494 | 698 | 0.4475 | 0.6051 | 0.4475 | 0.6690 |
| 0.5571 | 1.7544 | 700 | 0.4307 | 0.5766 | 0.4307 | 0.6563 |
| 0.5571 | 1.7594 | 702 | 0.4258 | 0.5444 | 0.4258 | 0.6525 |
| 0.5571 | 1.7644 | 704 | 0.4196 | 0.5699 | 0.4196 | 0.6478 |
| 0.5571 | 1.7694 | 706 | 0.4748 | 0.6399 | 0.4748 | 0.6891 |
| 0.5571 | 1.7744 | 708 | 0.5012 | 0.6434 | 0.5012 | 0.7079 |
| 0.5571 | 1.7794 | 710 | 0.4461 | 0.5887 | 0.4461 | 0.6679 |
| 0.5571 | 1.7845 | 712 | 0.4358 | 0.5846 | 0.4358 | 0.6602 |
| 0.5571 | 1.7895 | 714 | 0.4710 | 0.6148 | 0.4710 | 0.6863 |
| 0.5571 | 1.7945 | 716 | 0.5778 | 0.6412 | 0.5778 | 0.7601 |
| 0.5571 | 1.7995 | 718 | 0.5850 | 0.6509 | 0.5850 | 0.7648 |
| 0.5571 | 1.8045 | 720 | 0.5514 | 0.6328 | 0.5514 | 0.7426 |
| 0.5571 | 1.8095 | 722 | 0.5716 | 0.6378 | 0.5716 | 0.7560 |
| 0.5571 | 1.8145 | 724 | 0.5138 | 0.6308 | 0.5138 | 0.7168 |
| 0.5571 | 1.8195 | 726 | 0.5560 | 0.6329 | 0.5560 | 0.7456 |
| 0.5571 | 1.8246 | 728 | 0.7560 | 0.6487 | 0.7560 | 0.8695 |
| 0.5571 | 1.8296 | 730 | 0.9609 | 0.6486 | 0.9609 | 0.9803 |
| 0.5571 | 1.8346 | 732 | 0.9759 | 0.6408 | 0.9759 | 0.9879 |
| 0.5571 | 1.8396 | 734 | 0.7125 | 0.6358 | 0.7125 | 0.8441 |
| 0.5571 | 1.8446 | 736 | 0.5211 | 0.5805 | 0.5211 | 0.7218 |
| 0.5571 | 1.8496 | 738 | 0.5129 | 0.5315 | 0.5129 | 0.7161 |
| 0.5571 | 1.8546 | 740 | 0.6293 | 0.5576 | 0.6293 | 0.7933 |
| 0.5571 | 1.8596 | 742 | 0.6748 | 0.5725 | 0.6748 | 0.8214 |
| 0.5571 | 1.8647 | 744 | 0.5562 | 0.5489 | 0.5562 | 0.7458 |
| 0.5571 | 1.8697 | 746 | 0.4806 | 0.4928 | 0.4806 | 0.6933 |
| 0.5571 | 1.8747 | 748 | 0.4776 | 0.4879 | 0.4776 | 0.6911 |
| 0.5571 | 1.8797 | 750 | 0.5436 | 0.5619 | 0.5436 | 0.7373 |
| 0.5571 | 1.8847 | 752 | 0.5897 | 0.5820 | 0.5897 | 0.7679 |
| 0.5571 | 1.8897 | 754 | 0.5117 | 0.5613 | 0.5117 | 0.7153 |
| 0.5571 | 1.8947 | 756 | 0.4801 | 0.5058 | 0.4801 | 0.6929 |
| 0.5571 | 1.8997 | 758 | 0.5010 | 0.5588 | 0.5010 | 0.7078 |
| 0.5571 | 1.9048 | 760 | 0.5344 | 0.5967 | 0.5344 | 0.7310 |
| 0.5571 | 1.9098 | 762 | 0.5272 | 0.5983 | 0.5272 | 0.7261 |
| 0.5571 | 1.9148 | 764 | 0.4507 | 0.5097 | 0.4507 | 0.6714 |
| 0.5571 | 1.9198 | 766 | 0.4384 | 0.4962 | 0.4384 | 0.6621 |
| 0.5571 | 1.9248 | 768 | 0.4393 | 0.5624 | 0.4393 | 0.6628 |
| 0.5571 | 1.9298 | 770 | 0.4908 | 0.6264 | 0.4908 | 0.7006 |
| 0.5571 | 1.9348 | 772 | 0.4441 | 0.6067 | 0.4441 | 0.6664 |
| 0.5571 | 1.9398 | 774 | 0.4142 | 0.5465 | 0.4142 | 0.6436 |
| 0.5571 | 1.9449 | 776 | 0.4146 | 0.5412 | 0.4146 | 0.6439 |
| 0.5571 | 1.9499 | 778 | 0.4178 | 0.5627 | 0.4178 | 0.6464 |
| 0.5571 | 1.9549 | 780 | 0.4266 | 0.5878 | 0.4266 | 0.6531 |
| 0.5571 | 1.9599 | 782 | 0.4221 | 0.5621 | 0.4221 | 0.6497 |
| 0.5571 | 1.9649 | 784 | 0.4331 | 0.5819 | 0.4331 | 0.6581 |
| 0.5571 | 1.9699 | 786 | 0.4728 | 0.6237 | 0.4728 | 0.6876 |
| 0.5571 | 1.9749 | 788 | 0.4944 | 0.6426 | 0.4944 | 0.7031 |
| 0.5571 | 1.9799 | 790 | 0.4526 | 0.5940 | 0.4526 | 0.6727 |
| 0.5571 | 1.9850 | 792 | 0.4235 | 0.5192 | 0.4235 | 0.6508 |
| 0.5571 | 1.9900 | 794 | 0.4330 | 0.5017 | 0.4330 | 0.6580 |
| 0.5571 | 1.9950 | 796 | 0.4236 | 0.5305 | 0.4236 | 0.6509 |
| 0.5571 | 2.0 | 798 | 0.4616 | 0.5975 | 0.4616 | 0.6794 |
| 0.5571 | 2.0050 | 800 | 0.4668 | 0.5983 | 0.4668 | 0.6832 |
| 0.5571 | 2.0100 | 802 | 0.4351 | 0.5663 | 0.4351 | 0.6596 |
| 0.5571 | 2.0150 | 804 | 0.4784 | 0.6276 | 0.4784 | 0.6916 |
| 0.5571 | 2.0201 | 806 | 0.5037 | 0.6331 | 0.5037 | 0.7097 |
| 0.5571 | 2.0251 | 808 | 0.4571 | 0.5853 | 0.4571 | 0.6761 |
| 0.5571 | 2.0301 | 810 | 0.4672 | 0.6037 | 0.4672 | 0.6835 |
| 0.5571 | 2.0351 | 812 | 0.5475 | 0.6581 | 0.5475 | 0.7400 |
| 0.5571 | 2.0401 | 814 | 0.5924 | 0.6618 | 0.5924 | 0.7697 |
| 0.5571 | 2.0451 | 816 | 0.5604 | 0.6405 | 0.5604 | 0.7486 |
| 0.5571 | 2.0501 | 818 | 0.5110 | 0.5976 | 0.5110 | 0.7148 |
| 0.5571 | 2.0551 | 820 | 0.5699 | 0.6294 | 0.5699 | 0.7549 |
| 0.5571 | 2.0602 | 822 | 0.5817 | 0.6288 | 0.5817 | 0.7627 |
| 0.5571 | 2.0652 | 824 | 0.4996 | 0.5922 | 0.4996 | 0.7069 |
| 0.5571 | 2.0702 | 826 | 0.4440 | 0.5401 | 0.4440 | 0.6663 |
| 0.5571 | 2.0752 | 828 | 0.4615 | 0.5903 | 0.4615 | 0.6793 |
| 0.5571 | 2.0802 | 830 | 0.4692 | 0.6069 | 0.4692 | 0.6850 |
| 0.5571 | 2.0852 | 832 | 0.4298 | 0.5554 | 0.4298 | 0.6556 |
| 0.5571 | 2.0902 | 834 | 0.4304 | 0.5672 | 0.4304 | 0.6561 |
| 0.5571 | 2.0952 | 836 | 0.5049 | 0.6301 | 0.5049 | 0.7105 |
| 0.5571 | 2.1003 | 838 | 0.5158 | 0.6337 | 0.5158 | 0.7182 |
| 0.5571 | 2.1053 | 840 | 0.4419 | 0.5767 | 0.4419 | 0.6647 |
| 0.5571 | 2.1103 | 842 | 0.4329 | 0.5600 | 0.4329 | 0.6580 |
| 0.5571 | 2.1153 | 844 | 0.4654 | 0.6179 | 0.4654 | 0.6822 |
| 0.5571 | 2.1203 | 846 | 0.6013 | 0.6654 | 0.6013 | 0.7755 |
| 0.5571 | 2.1253 | 848 | 0.5630 | 0.6567 | 0.5630 | 0.7503 |
| 0.5571 | 2.1303 | 850 | 0.5085 | 0.6395 | 0.5085 | 0.7131 |
| 0.5571 | 2.1353 | 852 | 0.4491 | 0.5956 | 0.4491 | 0.6702 |
| 0.5571 | 2.1404 | 854 | 0.4729 | 0.6282 | 0.4729 | 0.6877 |
| 0.5571 | 2.1454 | 856 | 0.5654 | 0.6561 | 0.5654 | 0.7519 |
| 0.5571 | 2.1504 | 858 | 0.6594 | 0.6728 | 0.6594 | 0.8120 |
| 0.5571 | 2.1554 | 860 | 0.5545 | 0.6536 | 0.5545 | 0.7446 |
| 0.5571 | 2.1604 | 862 | 0.4411 | 0.5923 | 0.4411 | 0.6641 |
| 0.5571 | 2.1654 | 864 | 0.4523 | 0.6030 | 0.4523 | 0.6726 |
| 0.5571 | 2.1704 | 866 | 0.6010 | 0.6394 | 0.6010 | 0.7752 |
| 0.5571 | 2.1754 | 868 | 0.7629 | 0.6542 | 0.7629 | 0.8734 |
| 0.5571 | 2.1805 | 870 | 0.7774 | 0.6315 | 0.7774 | 0.8817 |
| 0.5571 | 2.1855 | 872 | 0.6239 | 0.6082 | 0.6239 | 0.7899 |
| 0.5571 | 2.1905 | 874 | 0.5677 | 0.5612 | 0.5677 | 0.7534 |
| 0.5571 | 2.1955 | 876 | 0.5746 | 0.5653 | 0.5746 | 0.7580 |
| 0.5571 | 2.2005 | 878 | 0.5504 | 0.5663 | 0.5504 | 0.7419 |
| 0.5571 | 2.2055 | 880 | 0.6754 | 0.5995 | 0.6754 | 0.8218 |
| 0.5571 | 2.2105 | 882 | 0.7498 | 0.5779 | 0.7498 | 0.8659 |
| 0.5571 | 2.2155 | 884 | 0.6529 | 0.5812 | 0.6529 | 0.8080 |
| 0.5571 | 2.2206 | 886 | 0.5076 | 0.4907 | 0.5076 | 0.7124 |
| 0.5571 | 2.2256 | 888 | 0.4897 | 0.4806 | 0.4897 | 0.6998 |
| 0.5571 | 2.2306 | 890 | 0.5096 | 0.5543 | 0.5096 | 0.7139 |
| 0.5571 | 2.2356 | 892 | 0.6177 | 0.5946 | 0.6177 | 0.7859 |
| 0.5571 | 2.2406 | 894 | 0.6219 | 0.5986 | 0.6219 | 0.7886 |
| 0.5571 | 2.2456 | 896 | 0.4861 | 0.5611 | 0.4861 | 0.6972 |
| 0.5571 | 2.2506 | 898 | 0.4568 | 0.5383 | 0.4568 | 0.6759 |
| 0.5571 | 2.2556 | 900 | 0.4699 | 0.5573 | 0.4699 | 0.6855 |
| 0.5571 | 2.2607 | 902 | 0.4745 | 0.5613 | 0.4745 | 0.6889 |
| 0.5571 | 2.2657 | 904 | 0.4871 | 0.5536 | 0.4871 | 0.6979 |
| 0.5571 | 2.2707 | 906 | 0.4646 | 0.4899 | 0.4646 | 0.6816 |
| 0.5571 | 2.2757 | 908 | 0.4678 | 0.5065 | 0.4678 | 0.6839 |
| 0.5571 | 2.2807 | 910 | 0.5217 | 0.5645 | 0.5217 | 0.7223 |
| 0.5571 | 2.2857 | 912 | 0.5797 | 0.6032 | 0.5797 | 0.7614 |
| 0.5571 | 2.2907 | 914 | 0.5562 | 0.5842 | 0.5562 | 0.7458 |
| 0.5571 | 2.2957 | 916 | 0.6029 | 0.6056 | 0.6029 | 0.7765 |
| 0.5571 | 2.3008 | 918 | 0.5968 | 0.5872 | 0.5968 | 0.7725 |
| 0.5571 | 2.3058 | 920 | 0.5587 | 0.5473 | 0.5587 | 0.7475 |
| 0.5571 | 2.3108 | 922 | 0.5500 | 0.5395 | 0.5500 | 0.7417 |
| 0.5571 | 2.3158 | 924 | 0.5514 | 0.5543 | 0.5514 | 0.7426 |
| 0.5571 | 2.3208 | 926 | 0.5263 | 0.5547 | 0.5263 | 0.7255 |
| 0.5571 | 2.3258 | 928 | 0.5485 | 0.5922 | 0.5485 | 0.7406 |
| 0.5571 | 2.3308 | 930 | 0.5737 | 0.6008 | 0.5737 | 0.7574 |
| 0.5571 | 2.3358 | 932 | 0.5434 | 0.6066 | 0.5434 | 0.7371 |
| 0.5571 | 2.3409 | 934 | 0.5346 | 0.6080 | 0.5346 | 0.7312 |
| 0.5571 | 2.3459 | 936 | 0.4692 | 0.5696 | 0.4692 | 0.6850 |
| 0.5571 | 2.3509 | 938 | 0.4761 | 0.5688 | 0.4761 | 0.6900 |
| 0.5571 | 2.3559 | 940 | 0.5221 | 0.5984 | 0.5221 | 0.7225 |
| 0.5571 | 2.3609 | 942 | 0.5685 | 0.6334 | 0.5685 | 0.7540 |
| 0.5571 | 2.3659 | 944 | 0.4934 | 0.5824 | 0.4934 | 0.7024 |
| 0.5571 | 2.3709 | 946 | 0.4530 | 0.5297 | 0.4530 | 0.6730 |
| 0.5571 | 2.3759 | 948 | 0.4609 | 0.5680 | 0.4609 | 0.6789 |
| 0.5571 | 2.3810 | 950 | 0.6082 | 0.6681 | 0.6082 | 0.7799 |
| 0.5571 | 2.3860 | 952 | 0.7026 | 0.6677 | 0.7026 | 0.8382 |
| 0.5571 | 2.3910 | 954 | 0.5648 | 0.6498 | 0.5648 | 0.7515 |
| 0.5571 | 2.3960 | 956 | 0.5489 | 0.6313 | 0.5489 | 0.7408 |
| 0.5571 | 2.4010 | 958 | 0.5206 | 0.5985 | 0.5206 | 0.7216 |
| 0.5571 | 2.4060 | 960 | 0.5268 | 0.6089 | 0.5268 | 0.7258 |
| 0.5571 | 2.4110 | 962 | 0.5396 | 0.6242 | 0.5396 | 0.7346 |
| 0.5571 | 2.4160 | 964 | 0.5595 | 0.6424 | 0.5595 | 0.7480 |
| 0.5571 | 2.4211 | 966 | 0.5666 | 0.6361 | 0.5666 | 0.7527 |
| 0.5571 | 2.4261 | 968 | 0.5291 | 0.6188 | 0.5291 | 0.7274 |
| 0.5571 | 2.4311 | 970 | 0.5694 | 0.6359 | 0.5694 | 0.7546 |
| 0.5571 | 2.4361 | 972 | 0.6898 | 0.6676 | 0.6898 | 0.8306 |
| 0.5571 | 2.4411 | 974 | 0.6954 | 0.6658 | 0.6954 | 0.8339 |
| 0.5571 | 2.4461 | 976 | 0.6629 | 0.6456 | 0.6629 | 0.8142 |
| 0.5571 | 2.4511 | 978 | 0.5466 | 0.6189 | 0.5466 | 0.7393 |
| 0.5571 | 2.4561 | 980 | 0.5387 | 0.6122 | 0.5387 | 0.7340 |
| 0.5571 | 2.4612 | 982 | 0.5764 | 0.6032 | 0.5764 | 0.7592 |
| 0.5571 | 2.4662 | 984 | 0.5706 | 0.6140 | 0.5706 | 0.7554 |
| 0.5571 | 2.4712 | 986 | 0.6569 | 0.6213 | 0.6569 | 0.8105 |
| 0.5571 | 2.4762 | 988 | 0.6889 | 0.6327 | 0.6889 | 0.8300 |
| 0.5571 | 2.4812 | 990 | 0.6509 | 0.6402 | 0.6509 | 0.8068 |
| 0.5571 | 2.4862 | 992 | 0.6564 | 0.6480 | 0.6564 | 0.8102 |
| 0.5571 | 2.4912 | 994 | 0.6380 | 0.6511 | 0.6380 | 0.7987 |
| 0.5571 | 2.4962 | 996 | 0.6702 | 0.6448 | 0.6702 | 0.8186 |
| 0.5571 | 2.5013 | 998 | 0.8139 | 0.6552 | 0.8139 | 0.9021 |
| 0.2198 | 2.5063 | 1000 | 0.7314 | 0.6696 | 0.7314 | 0.8552 |
| 0.2198 | 2.5113 | 1002 | 0.5583 | 0.6303 | 0.5583 | 0.7472 |
| 0.2198 | 2.5163 | 1004 | 0.4692 | 0.5565 | 0.4692 | 0.6850 |
| 0.2198 | 2.5213 | 1006 | 0.4702 | 0.5623 | 0.4702 | 0.6857 |
| 0.2198 | 2.5263 | 1008 | 0.5591 | 0.6164 | 0.5591 | 0.7478 |
| 0.2198 | 2.5313 | 1010 | 0.5838 | 0.6106 | 0.5838 | 0.7640 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mrTvister/vovka | mrTvister | 2024-11-06T18:37:25Z | 314 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2024-11-06T18:34:57Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
v0v1k. Animated illustration in Russian animation style of Ariel the Little
Mermaid and Sebastian the crab, inspired by 'Vovka in the Far Far Away
Kingdom'. Ariel has a bright red flowing hair, radiant coral-colored tail
with iridescent scales, wearing a purple seashell top. Sebastian is stylized
with exaggerated cartoony features, bright crimson shell. Around them is an
underwater scene with curvy, playful seaweed in turquoise and lime colors,
pink and orange coral formations, colorful tropical fish swimming about. The
water has a soft blue-green tint with bubbles floating upward. Background
features the underwater castle with whimsical curved spires and domes in
pastel colors.
output:
url: images/lora_image_1 (1).webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: v0v1k
---
# Vovka in a Far Far Away Kingdom
<Gallery />
## Trigger words
You should use `v0v1k` to trigger the image generation.
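For reference, a minimal 🧨 diffusers sketch (an illustrative assumption on my part — any FLUX.1-dev-compatible workflow should also work; the sampler settings below are typical defaults, not tested values):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base listed above, attach this LoRA, and use the trigger word.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("mrTvister/vovka")
pipe.to("cuda")

image = pipe(
    "v0v1k. An underwater scene in Russian animation style",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("vovka.png")
```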
## Download model
Weights for this model are available in Safetensors format.
[Download](/mrTvister/vovka/tree/main) them in the Files & versions tab.
|
wasmdashai/Llama-3.2-1B-v1 | wasmdashai | 2024-11-06T18:22:04Z | 144 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T18:11:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
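In the meantime, a generic sketch for a Llama-style causal LM (an assumption based on this repo's name and tags, not official usage instructions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic causal-LM usage; model-specific prompting/chat templates are not documented yet.
tok = AutoTokenizer.from_pretrained("wasmdashai/Llama-3.2-1B-v1")
model = AutoModelForCausalLM.from_pretrained("wasmdashai/Llama-3.2-1B-v1")

inputs = tok("Hello, my name is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```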
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maxrs/leichte-sprache2image | maxrs | 2024-11-06T18:14:44Z | 13 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-11-06T17:50:09Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
The image shows a bowl filled with a variety of fruits, which are often
associated with being rich in vitamins. The fruits include a pineapple, a
bunch of purple grapes, a banana, an apple, and a strawberry. leichte
sprache style
output:
url: images/Vitamine_CS_D.png
- text: >-
The image shows an illustration of a person and a dog. The person appears to
be a woman with blonde hair, wearing a gray sweater, blue polka dot pants,
and brown shoes. She is holding a harness attached to a brown dog, which is
wearing a red and white harness. leichte sprache style
output:
url: images/Blindenhund_RS_IC.png
- text: >-
The image shows an illustration of a man in a wheelchair. He appears to be
looking upwards, possibly towards a set of stairs or a barrier that he is
facing. The title "Barrier" suggests that the image might be commenting on
the challenges or obstacles that people with disabilities may encounter in
their daily lives. leichte sprache style
output:
url: images/Barriere_CS_D.png
- text: >-
Young people help older people to use mobile phone and laptop top. leichte
sprache style
output:
url: images/Digital im Alter_CS_D-000010.png
- text: >-
The image shows a depiction of an unhealthy diet, consisting of a burger,
french fries, hot dogs, and a bottle of soda. These items are often associated
with fast food and are typically high in calories, unhealthy fats, and sodium,
which can contribute to health issues when consumed in excess. leichte sprache style
output:
url: images/Ungesunde Ernährung_CS_IC-000010.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: leichte sprache style
---
# Leichte-Sprache2Image
<Gallery />
## Model description
These LoRA checkpoints were created with four different variations of one dataset (CS_D, CS_IC, RS_D & RS_IC). For each dataset variation there are checkpoints from the 5th (-000005), 10th (-0000010) and 20th (-0000020) epoch. The LoRAs should be able to generate images in a cartoon-like style to support texts in easy language (German "Leichte Sprache").
## Trigger words
You should use `leichte sprache style` to trigger the image generation.
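As a reference, a minimal 🧨 diffusers sketch (the `weight_name` below is a placeholder — pick an actual checkpoint file from the Files & versions tab):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base, attach one of the LoRA checkpoints from this repo,
# and include the trigger phrase in the prompt.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("maxrs/leichte-sprache2image", weight_name="CS_D-000010.safetensors")  # placeholder file name

image = pipe(
    "A bowl of fruit rich in vitamins, leichte sprache style",
    num_inference_steps=30,
).images[0]
image.save("leichte_sprache.png")
```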
## Download model
Weights for this model are available in Safetensors format.
[Download](/maxrs/leichte-sprache2image/tree/main) them in the Files & versions tab.
|
FathomNet/fathomnet2023-comp-baseline | FathomNet | 2024-11-06T18:08:06Z | 4 | 0 | null | [
"ocean",
"benthic",
"object-detection",
"arxiv:2307.08781",
"license:cc-by-4.0",
"region:us"
] | object-detection | 2023-07-19T14:27:15Z | ---
license: cc-by-4.0
tags:
- ocean
- benthic
- object-detection
pipeline_tag: object-detection
---
# FathomNet2023 Baseline Model
## Model Details
- Trained by researchers at [Monterey Bay Aquarium Research Institute](https://www.mbari.org/) (MBARI) as a baseline for the [FathomNet2023 Competition](https://www.kaggle.com/competitions/fathomnet-out-of-sample-detection/overview) presented with the [Fine Grained Visual Categorization workshop](https://sites.google.com/view/fgvc10/home) at CVPR 2023.
- [Ultralytics YOLOv8.0.117](https://github.com/ultralytics/ultralytics/pull/3145)
- Object detection
- Fine-tuned yolov8m to detect 290 fine-grained taxonomic categories of benthic animals in the Greater Monterey Bay Area off the coast of Central California.
## Intended Use
- Make detections on images collected on the sea floor in the Monterey Bay Area.
## Factors
- Distribution shifts related to sampling platform, camera parameters, illumination, and deployment environment are expected to impact model performance.
- Evaluation was performed on an IID subset of available training data.
- Data to test out of distribution performance can be found on the [competition Kaggle page](https://www.kaggle.com/competitions/fathomnet-out-of-sample-detection/overview).
## Metrics
- [Precision-Recall curve](https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/PR_curve.png) and [per class accuracy](https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/confusion_matrix.png) were evaluated at test time.
- mAP@0.5 = 0.33515
- Performance is quite variable depending on the target organism even when testing on in-distribution data.
- Identified out-of-sample images with a binary metric, returning [ROC ~= 0.7](https://arxiv.org/abs/2307.08781).
## Training and Evaluation Data
- Training data is the [FathomNet2023 competition split](https://www.kaggle.com/competitions/fathomnet-out-of-sample-detection/overview) and internal MBARI data
- Class labels have a [long tail and localizations occur throughout the frame](https://huggingface.co/FathomNet/fathomnet2023-comp-baseline/blob/main/plots/labels.jpg).
## Deployment
In an environment running YOLOv8:
```
yolo predict model=fathomnet23-comp-baseline.pt source=data/images/
``` |
FathomNet/MBARI-midwater-supercategory-detector | FathomNet | 2024-11-06T18:08:01Z | 4 | 0 | null | [
"tensorboard",
"ocean",
"midwater",
"object-detection",
"license:cc-by-4.0",
"region:us"
] | object-detection | 2023-05-18T19:14:57Z | ---
license: cc-by-4.0
tags:
- ocean
- midwater
- object-detection
---
# MBARI Midwater Supercategory Detector
## Model Details
- Trained by researchers at [CVisionAI](https://www.cvisionai.com/) and the [Monterey Bay Aquarium Research Institute](https://www.mbari.org/) (MBARI).
- [YOLOv5v6.2](https://github.com/ultralytics/yolov5/tree/v6.2)
- Object detection
- Fine-tuned yolov5l to detect 22 morphotaxonomic categories of midwater animals in the Greater Monterey Bay Area off the coast of Central California.
## Intended Use
- Make real time detections on video feed from MBARI Remotely Operated Vehicles.
- Post-process video collected in the region by MBARI vehicles.
## Factors
- Distribution shifts related to sampling platform, camera parameters, illumination, and deployment environment are expected to impact model performance.
- Evaluation was performed on an IID subset of available training data. Data to test out of distribution performance not currently available.
## Metrics
- [Precision-Recall curve](https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/PR_curve.png) and [per class accuracy](https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/confusion_matrix.png) were evaluated at test time.
- mAP@0.5 = 0.866
- Indicates reasonably good performance for target task.
## Training and Evaluation Data
- A combination of publicly available [FathomNet](https://fathomnet.org/fathomnet/#/) and internal MBARI data
- Class labels have a [long tail and localizations occur throughout the frame](https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/labels.jpg).
## Deployment
In an environment running [YOLOv5v6.2](https://github.com/ultralytics/yolov5/tree/v6.2):
```
python detect.py --weights best.pt --source data/images/
``` |
FathomNet/MBARI-315k-yolov8 | FathomNet | 2024-11-06T18:07:55Z | 16 | 1 | null | [
"ocean",
"midwater",
"benthic",
"object-detection",
"license:cc-by-4.0",
"region:us"
] | object-detection | 2023-08-22T19:21:47Z | ---
license: cc-by-4.0
tags:
- ocean
- midwater
- benthic
- object-detection
---
# MBARI Monterey Bay 315k YOLOv8
<!-- TODO: Fill out the model card
## Model Details
- Trained by researchers at [CVisionAI](https://www.cvisionai.com/) and the [Monterey Bay Aquarium Research Institute](https://www.mbari.org/) (MBARI).
- [YOLOv5v6.2](https://github.com/ultralytics/yolov5/tree/v6.2)
- Object detection
- Fine tuned yolov5l to detect 22 morhpotaxonmic categories of midwater animals in the Greater Monterey Bay Area off the coast of Central California.
## Intended Use
- Make real time detections on video feed from MBARI Remotely Operated Vehicles.
- Post-process video collected in the region by MBARI vehicles.
## Factors
- Distribution shifts related to sampling platform, camera parameters, illumination, and deployment environment are expected to impact model performance.
- Evaluation was performed on an IID subset of available training data. Data to test out of distribution performance not currently available.
## Metrics
- [Precision-Recall curve](https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/PR_curve.png) and [per class accuracy]((https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/confusion_matrix.png)) were evaluated at test time.
- [email protected] = 0.866
- Indicates reasonably good performance for target task.
## Training and Evaluation Data
- A combination of publicly available [FathomNet](https://fathomnet.org/fathomnet/#/) and internal MBARI data
- Class labels have a [long tail and localizations occur throughout the frame](https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/labels.jpg).
## Deployment
In an environment running [YOLOv5v6.2](https://github.com/ultralytics/yolov5/tree/v6.2):
```
python classify/predict.py --weights best.pt --data data/images/
```
--> |
FathomNet/MBARI-315k-yolov5 | FathomNet | 2024-11-06T18:07:52Z | 3 | 0 | null | [
"ocean",
"midwater",
"benthic",
"object-detection",
"license:cc-by-4.0",
"region:us"
] | object-detection | 2023-08-22T19:19:55Z | ---
license: cc-by-4.0
tags:
- ocean
- midwater
- benthic
- object-detection
---
# MBARI Monterey Bay 315k YOLOv5
<!-- TODO: Fill out the model card
## Model Details
- Trained by researchers at [CVisionAI](https://www.cvisionai.com/) and the [Monterey Bay Aquarium Research Institute](https://www.mbari.org/) (MBARI).
- [YOLOv5v6.2](https://github.com/ultralytics/yolov5/tree/v6.2)
- Object detection
- Fine tuned yolov5l to detect 22 morhpotaxonmic categories of midwater animals in the Greater Monterey Bay Area off the coast of Central California.
## Intended Use
- Make real time detections on video feed from MBARI Remotely Operated Vehicles.
- Post-process video collected in the region by MBARI vehicles.
## Factors
- Distribution shifts related to sampling platform, camera parameters, illumination, and deployment environment are expected to impact model performance.
- Evaluation was performed on an IID subset of available training data. Data to test out of distribution performance not currently available.
## Metrics
- [Precision-Recall curve](https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/PR_curve.png) and [per class accuracy]((https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/confusion_matrix.png)) were evaluated at test time.
- [email protected] = 0.866
- Indicates reasonably good performance for target task.
## Training and Evaluation Data
- A combination of publicly available [FathomNet](https://fathomnet.org/fathomnet/#/) and internal MBARI data
- Class labels have a [long tail and localizations occur throughout the frame](https://huggingface.co/FathomNet/MBARI-midwater-supercategory-detector/blob/main/plots/labels.jpg).
## Deployment
In an environment running [YOLOv5v6.2](https://github.com/ultralytics/yolov5/tree/v6.2):
```
python classify/predict.py --weights best.pt --data data/images/
```
--> |
1g0rrr/grab_candy | 1g0rrr | 2024-11-06T17:57:31Z | 13 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | 2024-11-06T17:57:24Z | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
clementdevarieux/my_awesome_wnut_model | clementdevarieux | 2024-11-06T17:54:55Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"token-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-06T17:54:22Z | ---
library_name: transformers
license: mit
base_model: almanach/camembert-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [almanach/camembert-base](https://huggingface.co/almanach/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0159
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 160 | 0.1652 | 0.0 | 0.0 | 0.0 | 0.9528 |
| No log | 2.0 | 320 | 0.0499 | 0.0 | 0.0 | 0.0 | 0.9943 |
| No log | 3.0 | 480 | 0.0303 | 0.0 | 0.0 | 0.0 | 0.9960 |
| 0.1412 | 4.0 | 640 | 0.0239 | 0.0 | 0.0 | 0.0 | 0.9967 |
| 0.1412 | 5.0 | 800 | 0.0206 | 0.0 | 0.0 | 0.0 | 0.9968 |
| 0.1412 | 6.0 | 960 | 0.0186 | 0.0 | 0.0 | 0.0 | 0.9969 |
| 0.0254 | 7.0 | 1120 | 0.0173 | 0.0 | 0.0 | 0.0 | 0.9970 |
| 0.0254 | 8.0 | 1280 | 0.0165 | 0.0 | 0.0 | 0.0 | 0.9970 |
| 0.0254 | 9.0 | 1440 | 0.0161 | 0.0 | 0.0 | 0.0 | 0.9970 |
| 0.0184 | 10.0 | 1600 | 0.0159 | 0.0 | 0.0 | 0.0 | 0.9970 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Renwar0011/meme-coin-art | Renwar0011 | 2024-11-06T17:54:27Z | 50 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-06T17:54:18Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: memeart12
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# meme_coin_art
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `memeart12` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
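
As a hedged sketch (not part of the original card), the LoRA can also be loaded with the 🧨 diffusers library; the weight filename below is a guess, so check the Files & versions tab for the actual name:

```python
import torch
from diffusers import FluxPipeline

# Sketch only: load the FLUX.1-dev base model and attach this LoRA.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("Renwar0011/meme-coin-art", weight_name="meme_coin_art.safetensors")  # hypothetical filename

# Use the trigger word `memeart12` in the prompt.
image = pipe(
    "memeart12, a shiny cartoon meme coin mascot, bold outlines, vibrant colors",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("meme_coin_art.png")
```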
|
mradermacher/Zenith-7B-dpo-v3-GGUF | mradermacher | 2024-11-06T17:54:15Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"Zenith-7B-dpo-v3",
"en",
"base_model:Xenon1/Zenith-7B-dpo-v3",
"base_model:quantized:Xenon1/Zenith-7B-dpo-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-04T07:32:04Z | ---
base_model: Xenon1/Zenith-7B-dpo-v3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mistral
- Zenith-7B-dpo-v3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Xenon1/Zenith-7B-dpo-v3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
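
As a quick, hedged example (not from the original card), a single-file quant can be run locally through the llama-cpp-python bindings; the model path below assumes you have downloaded the recommended Q4_K_M file:

```python
# Sketch only: requires `pip install llama-cpp-python` and a downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="Zenith-7B-dpo-v3.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short haiku about autumn.", max_tokens=128)
print(out["choices"][0]["text"])
```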
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Zenith-7B-dpo-v3-GGUF/resolve/main/Zenith-7B-dpo-v3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
pucpr-br/sbertimbau_news_2018 | pucpr-br | 2024-11-06T17:52:30Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pt",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-04-29T16:00:52Z | ---
library_name: sentence-transformers
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- pt
base_model:
- neuralmind/bert-base-portuguese-cased
---
# cristianomg10/sbertimbau_news_2018
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('cristianomg10/sbertimbau_news_2018')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('cristianomg10/sbertimbau_news_2018')
model = AutoModel.from_pretrained('cristianomg10/sbertimbau_news_2018')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=cristianomg10/sbertimbau_news_2018)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 250 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 0,
"weight_decay": 0.01
}
```
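
As a hedged sketch (not part of the original card), a comparable fine-tuning run with the parameters above could be set up as follows; the example texts and labels are placeholders, not the data used for this model:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder training examples: single Portuguese news texts with integer class labels.
train_examples = [
    InputExample(texts=["Notícia sobre economia..."], label=0),
    InputExample(texts=["Notícia sobre esportes..."], label=1),
]

model = SentenceTransformer("neuralmind/bert-base-portuguese-cased")
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.BatchAllTripletLoss(model=model)

# Mirrors the fit() parameters listed above (1 epoch, WarmupLinear with 0 warmup steps, AdamW, lr=2e-5).
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=0,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
)
```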
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
```
@inproceedings{imai2024isitfinetotune,
title={{Is it Fine to Tune? Evaluating SentenceBERT Fine-tuning for Brazilian Portuguese Text Stream Classification}},
author={Bruno Yuiti Leão Imai and Cristiano Mesquita Garcia and Marcio Vinicius Rocha and Alessandro Lameiras Koerich and Alceu de Souza Britto Jr and Jean Paul Barddal},
booktitle={IEEE Big Data},
year={2024},
organization={IEEE}
}
``` |
pucpr-br/sbertimbau_news_2019 | pucpr-br | 2024-11-06T17:52:10Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pt",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-04-29T16:01:08Z | ---
library_name: sentence-transformers
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- pt
base_model:
- neuralmind/bert-base-portuguese-cased
---
# cristianomg10/sbertimbau_news_2019
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('cristianomg10/sbertimbau_news_2019')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('cristianomg10/sbertimbau_news_2019')
model = AutoModel.from_pretrained('cristianomg10/sbertimbau_news_2019')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=cristianomg10/sbertimbau_news_2019)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 250 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 0,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
```
@inproceedings{imai2024isitfinetotune,
title={{Is it Fine to Tune? Evaluating SentenceBERT Fine-tuning for Brazilian Portuguese Text Stream Classification}},
author={Bruno Yuiti Leão Imai and Cristiano Mesquita Garcia and Marcio Vinicius Rocha and Alessandro Lameiras Koerich and Alceu de Souza Britto Jr and Jean Paul Barddal},
booktitle={IEEE Big Data},
year={2024},
organization={IEEE}
}
``` |
amd/PixArt-Sigma-Nitro | amd | 2024-11-06T17:52:06Z | 29 | 0 | diffusers | [
"diffusers",
"text-to-image",
"dataset:poloclub/diffusiondb",
"arxiv:2403.12015",
"base_model:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"base_model:finetune:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-11-05T17:51:17Z | ---
license: apache-2.0
datasets:
- poloclub/diffusiondb
base_model:
- PixArt-alpha/PixArt-Sigma-XL-2-1024-MS
pipeline_tag: text-to-image
library_name: diffusers
---
# AMD Nitro Diffusion

## Introduction
AMD Nitro Diffusion is a series of efficient text-to-image generation models that are distilled from popular diffusion models on AMD Instinct™ GPUs. The release consists of:
* [Stable Diffusion 2.1 Nitro](https://huggingface.co/amd/SD2.1-Nitro): a UNet-based one-step model distilled from [Stable Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1-base).
* [PixArt-Sigma Nitro](https://huggingface.co/amd/PixArt-Sigma-Nitro): a high resolution transformer-based one-step model distilled from [PixArt-Sigma](https://pixart-alpha.github.io/PixArt-sigma-project/).
⚡️ [Open-source code](https://github.com/AMD-AIG-AIMA/AMD-Diffusion-Distillation)! The models are based on our re-implementation of [Latent Adversarial Diffusion Distillation](https://arxiv.org/abs/2403.12015), the method used to build the popular Stable Diffusion 3 Turbo model. Since the original authors didn't provide training code, we release our re-implementation to help advance further research in the field.
## Details
* **Model architecture**: PixArt-Sigma Nitro has the same architecture as PixArt-Sigma and is compatible with the diffusers pipeline.
* **Inference steps**: This model is distilled to perform inference in just a single step. However, the training code also supports distilling a model for 2, 4 or 8 steps.
* **Hardware**: We use a single node consisting of 4 AMD Instinct™ MI250 GPUs for distilling PixArt-Sigma Nitro.
* **Dataset**: We use 1M prompts from [DiffusionDB](https://huggingface.co/datasets/poloclub/diffusiondb) and generate the corresponding images from the base PixArt-Sigma model.
* **Training cost**: The distillation process achieves reasonable results in less than 2 days on a single node.
## Quickstart
```python
from diffusers import PixArtSigmaPipeline
import torch
from safetensors.torch import load_file
pipe = PixArtSigmaPipeline.from_pretrained("PixArt-alpha/PixArt-Sigma-XL-2-1024-MS")
ckpt_path = '<path to distilled checkpoint>'
transformer_state_dict = load_file(ckpt_path)
pipe.transformer.load_state_dict(transformer_state_dict)
pipe = pipe.to("cuda")
image = pipe(prompt='a photo of a cat',
num_inference_steps=1,
guidance_scale=0,
timesteps=[400]).images[0]
```
For more details on training and evaluation please visit the [GitHub repo](https://github.com/AMD-AIG-AIMA/AMD-Diffusion-Distillation).
## Results
Compared to [PixArt-Sigma](https://pixart-alpha.github.io/PixArt-sigma-project/), our model achieves a 90.9% reduction in FLOPs at the cost of just 3.7% lower CLIP score and 10.5% higher FID.
| Model | FID ↓ | CLIP ↑ | FLOPs | Latency on AMD Instinct MI250 (sec) |
| :---: | :---: | :---: | :---: | :---: |
| PixArt-Sigma, 20 steps | 34.14 | 0.3289 | 187.96 | 7.46 |
| **PixArt-Sigma Nitro**, 1 step | 37.75 | 0.3167 | 17.04 | 0.53 |
## License
Copyright (c) 2018-2024 Advanced Micro Devices, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. |
Amine-CV/JLSCOM_garment_LoRA_flux_schnell_v1 | Amine-CV | 2024-11-06T17:52:00Z | 57 | 2 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-11-05T12:32:25Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: '[trigger] Garment Type: Slim-Fit Jeans Fit and Style: Slim-fit, designed
to hug the legs closely without being overly tight, offering a contemporary,
streamlined appearance. Color and Pattern: Soft pastel green in a solid shade,
adding a subtle pop of color to outfits while maintaining a minimalist, modern
look. Fabric/Material: Crafted from a stretch cotton blend, providing comfort,
flexibility, and durability. Details: Traditional five-pocket design with
two front pockets, two back pockets, and a small coin pocket, all seamlessly
integrated for functionality and style. Display Style: Displayed in a flat
lay to highlight the overall structure and color. Background and Lighting:
Set against a light gray background with soft, even lighting to bring out
the pastel hue of the jeans without overshadowing it. Shape: Fitted shape
with a tapered leg, maintaining a sleek and tailored silhouette from hip to
ankle. Closures: Secured with a standard button and zipper fly in matching
tones for a seamless look. Branding: Minimal branding with a discreet internal
label; no external logos, maintaining a clean, understated aesthetic. Cuffs
and Hems: Clean, stitched hems at the ankle, allowing the jeans to be worn
full-length or slightly rolled for a casual look. Fit: Slim yet comfortable,
allowing ease of movement while staying fitted through the legs. Length: Full
length, designed to sit right at the ankle, suitable for pairing with both
casual and semi-formal footwear. Occasion: Versatile enough for both casual
daily wear and smart-casual occasions, adding a fresh twist to any wardrobe.
Style Influence: Inspired by modern minimalist fashion, with a focus on clean
lines and a refined color palette. Seasonal Suitability: Ideal for spring
and summer wear due to the light color and breathable fabric. Texture: Smooth,
soft finish with a hint of stretch, ensuring comfort during prolonged wear.
Weight: Medium weight, suitable for warm weather without feeling too thin.
Finish: Matte finish, enhancing the soft, pastel tone for a polished, sophisticated
look. Aesthetic Style: Casual chic, blending comfort with a contemporary style
that is effortlessly versatile. Target Audience: Suitable for individuals
seeking stylish yet comfortable jeans with a unique color that is easy to
style. Ease of Care: Machine washable, with colorfastness to retain the pastel
shade after multiple washes.'
output:
url: samples/1730914281348__000004000_0.jpg
- text: '[trigger] Garment Type: Blazer Fit and Style: Regular fit with a tailored,
classic style that combines formality with a modern touch. Color and Pattern:
Soft sage green in a solid color, providing a subtle, sophisticated look.
Fabric/Material: Made from a lightweight, smooth wool blend, offering both
comfort and a structured appearance. Details: Features two-button closure
at the front, with a notched lapel collar for a classic blazer look. Includes
a single welt pocket on the chest and two large patch pockets at the lower
front for a stylish yet practical detail. Display Style: Displayed in a flat
lay to clearly showcase the entire structure and details of the blazer. Background
and Lighting: Light gray background with soft, even lighting to emphasize
the blazer''s color and details without distractions. Shape: Structured, slightly
fitted shape that follows a traditional blazer silhouette. Sleeve Style: Long
sleeves with a slight taper towards the wrist, finished with simple cuffs.
Closures: Front two-button closure with matte black buttons, adding to the
minimalistic and versatile design. Branding: Discreet internal branding tag
near the neckline, keeping the outer look clean and unbranded. Pockets: Three
pockets total''one welt pocket on the chest and two patch pockets at the front,
providing both functional and decorative elements. Cuffs and Hems: Standard
blazer hemline with neatly finished cuffs. Fit: Tailored, providing a structured
and polished look without being overly slim. Length: Standard blazer length,
designed to sit at the hip. Occasion: Suitable for formal occasions, work,
or smart-casual settings, adding a refined touch to any outfit. Style Influence:
Minimalist with classic tailoring, suitable for a modern professional look.
Seasonal Suitability: Ideal for spring and autumn due to its lightweight wool
blend material. Texture: Smooth with a slight sheen, giving a refined and
polished appearance. Weight: Medium weight, providing structure without being
too heavy, ideal for layering over shirts or light sweaters. Finish: Matte
finish, keeping the look understated and versatile for various settings. Aesthetic
Style: Clean, classic, and modern, suitable for professionals and style-conscious
individuals. Target Audience: Individuals looking for a versatile blazer that
can transition from formal work settings to more relaxed social gatherings.
Ease of Care: Dry clean recommended to maintain fabric quality and structure.'
output:
url: samples/1730914339817__000004000_1.jpg
- text: '[trigger] Garment Type: Denim Shorts Fit and Style: Regular fit with a
classic, relaxed style, offering comfort without being overly tight or baggy.
Color and Pattern: Light blue with a subtle faded wash, giving it a vintage-inspired
look and adding visual texture to the fabric. Fabric/Material: Made from soft,
breathable denim, likely a cotton blend for comfort and durability. Details:
Traditional five-pocket design with two front pockets, two back pockets, and
a small coin pocket within the right front pocket. Button closure and zipper
fly for secure fastening. Display Style: Displayed in a flat lay, showcasing
the shorts'' full length, pockets, and seams. Background and Lighting: Light
gray background with soft lighting, highlighting the light blue denim color
and subtle fading. Shape: Straight shape with a relaxed fit, ensuring comfort
around the thighs and hips. Closures: Front button and zipper closure, a classic
feature in denim shorts for easy wear. Embellishments: Minimal design with
no additional embellishments, maintaining a clean and simple aesthetic. Branding:
Discreet brand tag on the internal waistband; no external logos. Patterns/Prints:
Solid denim with a faded wash effect, adding dimension without any specific
patterns or prints. Pockets: Five-pocket design for both style and functionality,
allowing for small item storage. Cuffs and Hems: Standard hem style, ending
just above the knee, offering a casual and versatile length. Fit: Regular
fit that balances comfort and style, suitable for various body types. Length:
Knee-length, ideal for warm weather and casual wear. Occasion: Perfect for
casual, everyday wear during spring and summer, or for outdoor and relaxed
settings. Style Influence: Classic and timeless, inspired by traditional denim
styles with a slight vintage wash for a modern twist. Seasonal Suitability:
Best suited for spring and summer, given its breathable fabric and knee-length
design. Texture: Smooth denim texture, slightly faded for added character.
Weight: Medium weight, providing durability without being too heavy, ideal
for warmer climates. Finish: Matte finish with a faded wash effect, giving
it a lived-in, relaxed appearance. Aesthetic Style: Simple, casual, and versatile,
fitting well into a variety of everyday wardrobes. Target Audience: Suitable
for individuals seeking a comfortable and classic pair of denim shorts for
casual wear. Ease of Care: Machine washable for easy maintenance, durable
enough for frequent wear.'
output:
url: samples/1730914398336__000004000_2.jpg
- text: '[trigger] Garment Type: Chino Pants Fit and Style: Regular fit with a classic,
casual style, offering a comfortable and versatile silhouette. Color and Pattern:
Solid olive green, adding a subtle yet stylish touch to a neutral wardrobe.
Fabric/Material: Made from a durable cotton blend, providing comfort with
a slight stretch for flexibility. Details: Includes belt loops and a button
with zipper closure, adding functionality while maintaining a clean look.
Display Style: Displayed in a flat lay, highlighting the shape and structure
of the pants. Background and Lighting: Light gray background with soft, even
lighting that enhances the color and minimalist design. Shape: Straight-leg
shape with a relaxed but structured fit, ideal for casual and semi-formal
occasions. Closures: Front button and zipper closure for ease of wear and
a secure fit. Embellishments: No embellishments, keeping the design simple
and versatile. Branding: Likely includes a discreet internal label; no visible
external branding for a clean look. Patterns/Prints: Solid color with no patterns
or prints, emphasizing a minimalist, versatile aesthetic. Pockets: Four-pocket
design, featuring two front slash pockets and two rear welt pockets, providing
practical storage without disrupting the streamlined appearance. Cuffs and
Hems: Finished hems with a slight roll-up detail, adding a subtle casual touch.
Fit: Regular fit, offering a balance between tailored and relaxed for everyday
comfort. Length: Full length, intended to reach the ankle or slightly above
when rolled. Occasion: Suitable for casual outings, smart-casual settings,
or weekend wear, pairing well with a variety of tops. Style Influence: Minimalist
and classic, with a utilitarian feel that makes it adaptable for different
styles. Seasonal Suitability: Ideal for spring and fall, with a medium-weight
fabric that offers comfort in mild weather. Texture: Smooth and slightly textured,
providing a refined look with a soft touch. Weight: Medium weight, suitable
for transitioning between seasons. Finish: Matte finish, enhancing the subtle
elegance and versatility of the pants. Aesthetic Style: Classic and functional,
appealing to those who prefer understated, adaptable wardrobe staples. Target
Audience: Perfect for individuals seeking comfortable yet stylish pants that
can be dressed up or down. Ease of Care: Machine washable for easy maintenance,
making it a practical addition to any wardrobe.'
output:
url: samples/1730914456843__000004000_3.jpg
- text: '[trigger] Garment Type: Half-Zip Sweater Fit and Style: Regular fit with
a slightly tailored silhouette, offering a balanced look that isn''t too snug
or too loose, suitable for layering. Color and Pattern: Dark heathered gray
on the main body with solid black on the sleeves, shoulders, and cuffs. The
sweater has a subtle color-blocked design that adds contrast and visual interest.
Fabric/Material: Made from a soft and durable blend of cotton and polyester,
providing warmth and breathability. Details: Features a quarter-length zip
closure at the neckline for adjustability and styling flexibility, along with
a single welt pocket on the left chest for added functionality and style.
Display Style: Displayed in a flat lay style, giving a clear view of the design
and structure of the garment. Background and Lighting: Set against a neutral
gray background with soft, even lighting that highlights the contrast between
the gray and black areas of the sweater. Shape: Slightly tapered at the waist
and cuffs, providing a clean and structured look. Sleeve Style: Long sleeves
in solid black, contributing to the color-blocked effect. Neckline: High collar
with a half-zip for added versatility and a modern, sporty look. Closures:
Half-zip closure at the neck with a metal zipper, allowing for easy adjustments
to the neckline. Cuffs and Hems: Ribbed cuffs and hem to provide a snug fit
and help retain warmth, adding to the overall polished look. Fit: Regular
fit that maintains comfort and ease of movement, making it suitable for everyday
wear. Length: Standard length, falling around the hip, suitable for both layering
and wearing alone. Occasion: Ideal for casual outings, relaxed office environments,
or cool-weather activities. Style Influence: A blend of sporty and casual
influences, offering a versatile look that works well in urban or laid-back
settings. Seasonal Suitability: Suited for fall and winter, with a design
that provides warmth while being comfortable for transitional weather. Texture:
Smooth fabric on the body with a slight heathered effect, contrasting with
the solid, slightly textured black on the sleeves. Weight: Medium weight,
providing warmth without being overly bulky. Finish: Matte finish, lending
a sophisticated and modern look to the garment. Layer Style: Can be layered
over a T-shirt or under a jacket for added warmth. Collar Type: High collar
with zip functionality, providing both style and protection against cooler
weather. Aesthetic Style: Modern, minimalist with a sporty edge, appealing
to those who prefer subtle yet stylish details. Size Labeling: Standard fit,
to accommodate various body types. Climate Suitability: Perfect for mild to
cool climates where a lightweight yet warm layer is needed. Target Audience:
Designed for individuals who enjoy practical, comfortable, and stylish apparel
that can transition between casual and slightly dressed-up looks. Style Preferences:
Appeals to those who prefer versatile, understated fashion with a hint of
athletic influence. Ease of Care: Machine washable for easy maintenance and
durability. This half-zip sweater combines practicality and style, with a
modern color-blocked design that provides visual contrast. Its regular fit,
warm fabric, and versatile look make it a reliable choice for cooler weather,
perfect for layering or wearing on its own.'
output:
url: samples/1730914515347__000004000_4.jpg
- text: '[trigger] Garment Type: Chino Pants Fit and Style: Tailored slim-fit chinos,
offering a modern and polished appearance suitable for both casual and semi-formal
occasions. Color and Pattern: Soft beige color with a solid, clean finish,
making it versatile and easy to pair with various tops. Fabric/Material: Made
from a high-quality cotton blend, ensuring breathability and comfort with
a slight stretch for ease of movement. Details: Features a classic design
with belt loops, a front button closure, and a zip fly. There are no visible
logos, maintaining a minimalist aesthetic. Display Style: Displayed in a flat
lay format to showcase the structure and length clearly. Background and Lighting:
Set against a light cream background with soft, diffused lighting to enhance
the neutral tone of the fabric. Shape: Slim, tapered shape that offers a clean
silhouette without appearing too tight. Closures: Front button and zipper
closure provide a secure and classic look. Pockets: Includes two side pockets
and two back welt pockets, adding functionality without disrupting the streamlined
appearance. Cuffs and Hems: Finished with a standard hem at the ankles, giving
a neat and classic look that works well with both dress shoes and casual footwear.
Fit: Slim-fit, offering a modern look that''s both stylish and comfortable.
Length: Full-length chinos that reach the ankles, suitable for a variety of
occasions. Occasion: Ideal for both casual and business-casual settings; can
be dressed up or down with ease. Style Influence: Inspired by classic menswear
with a touch of contemporary minimalism, making it a wardrobe staple. Seasonal
Suitability: Suitable for all seasons, particularly spring and autumn, due
to the breathable cotton material. Texture: Smooth and soft to the touch,
providing a comfortable wear experience. Weight: Lightweight, making it comfortable
for day-long wear. Finish: Matte finish, contributing to the sophisticated
look of the pants. Aesthetic Style: Clean and minimal, catering to a classic
yet contemporary style preference. Target Audience: Designed for individuals
seeking a polished look that balances style with comfort. Ease of Care: Machine
washable and easy to care for, adding to its practicality for everyday wear.
These chinos are a versatile, refined addition to any wardrobe, offering a
polished, understated style suitable for both work and leisure. Their neutral
color and tailored fit make them ideal for a range of outfit combinations,
providing an effortlessly stylish look.'
output:
url: samples/1730914573841__000004000_5.jpg
- text: '[trigger] Garment Type: Turtleneck sweater Fit and Style: Regular fit,
slightly relaxed for a cozy yet polished appearance, suitable for layering
or wearing alone. Color and Pattern: Solid, creamy off-white color with no
additional patterns, giving it a clean, minimalist look. Fabric/Material:
Made from a soft, warm wool or wool-blend fabric, ideal for maintaining warmth
while providing breathability. Details: Ribbed texture around the turtleneck,
cuffs, and hem for added style and durability, with a fine knit texture across
the body and sleeves. Display Style: Displayed in a flat lay style, showing
the entire sweater''s design and structure clearly. Background and Lighting:
Set against a light grey background with soft, diffused lighting, creating
a neutral and inviting mood that complements the sweater''s color. Shape:
Straight shape with a slightly boxy silhouette, designed to drape comfortably
without clinging to the body. Sleeve Style: Long sleeves with ribbed cuffs
that provide a snug fit around the wrists, adding a refined detail to the
overall design. Neckline: High turtleneck collar, ribbed for texture and warmth,
designed to fold over or stand upright depending on the wearer''s preference.
Closures: No closures, emphasizing the pullover style of the sweater. Embellishments:
None, keeping the look simple and classic. Branding: No visible branding,
aligning with a minimalist, unbranded style. Cuffs and Hems: Ribbed cuffs
and hem add subtle structure and a tailored finish to the overall relaxed
fit. Fit: Regular fit, providing enough room for comfort while still maintaining
a tailored look. Length: Standard length, ending around the hip, making it
versatile for pairing with jeans, trousers, or skirts. Occasion: Suitable
for casual, business casual, or semi-formal settings, depending on styling.
Style Influence: Minimalist and classic, perfect for a timeless wardrobe.
Seasonal Suitability: Ideal for fall and winter wear due to the insulating
material and high neckline. Texture: Soft, smooth wool texture with a ribbed
finish on the turtleneck, cuffs, and hem. Weight: Medium-weight fabric, balancing
warmth and comfort. Finish: Matte finish, enhancing its cozy appearance. Aesthetic
Style: Simple yet chic, suitable for those who appreciate a minimalist, high-quality
look. Target Audience: Designed for individuals seeking versatile, comfortable,
and elegant winter clothing. Ease of Care: Likely requires gentle washing
or dry cleaning to maintain quality. This turtleneck sweater is a wardrobe
staple that combines comfort, warmth, and a refined, minimalist style, perfect
for a range of casual to semi-formal settings.'
output:
url: samples/1730914632315__000004000_6.jpg
- text: '[trigger] Garment Type: Corduroy Jacket Fit and Style: Relaxed fit with
a minimalist, utilitarian style. Color and Pattern: Light cream, solid color
with a smooth, uniform look. Fabric/Material: Made from corduroy, giving it
a soft, textured feel with visible ridges. Details: Features two large, square
patch pockets on the chest, adding a functional and balanced look. Display
Style: Displayed in a flat lay, showcasing the full front of the jacket. Background
and Lighting: Beige background with soft, diffused lighting, emphasizing the
jacket''s light color and texture. Shape: Slightly boxy shape, providing a
relaxed, unstructured silhouette. Sleeve Style: Long sleeves with a standard
cuff style, ending at the wrist. Neckline: Features a classic spread collar
for a clean, polished look. Closures: Full-length front zipper closure, adding
a modern, streamlined look. Embellishments: No additional embellishments,
maintaining a minimalist aesthetic. Branding: Discreet inner label tag at
the neckline; no external branding visible. Patterns/Prints: Solid color with
a natural corduroy texture. Pockets: Two front chest pockets with a square
shape and open top for easy access. Cuffs and Hems: Simple, finished hems
on sleeves and bottom; no added detail for a clean look. Fit: Relaxed fit
for comfortable layering over other clothing. Length: Standard jacket length,
ending just above the hip. Occasion: Suitable for casual wear, offering a
versatile layer for everyday outfits. Style Influence: Minimalist and utilitarian,
inspired by workwear aesthetics. Seasonal Suitability: Ideal for fall and
mild winter days due to its slightly heavier fabric. Texture: Soft corduroy
texture with subtle ridges for a tactile, cozy feel. Weight: Medium weight,
providing some warmth without being bulky. Finish: Matte finish, maintaining
the fabric''s natural appearance. Layer Style: Great as a top layer over T-shirts
or light sweaters. Collar Type: Spread collar, giving a structured yet casual
vibe. Aesthetic Style: Simple, functional aesthetic with a nod to vintage
workwear. Target Audience: Perfect for individuals who value understated,
practical clothing with a hint of vintage charm. Ease of Care: Likely machine
washable, with care recommended for the corduroy fabric.'
output:
url: samples/1730914690817__000004000_7.jpg
- text: '[trigger] Garment Type: Quarter-Zip Knit Sweater Fit and Style: This sweater
has a relaxed yet tailored fit, making it suitable for layering over shirts
or wearing solo for a polished, casual look. Color and Pattern: Light heather
grey, with a solid color and no additional patterns, offering a minimalist
aesthetic. Fabric/Material: Crafted from a soft wool-blend knit, providing
warmth and comfort while remaining lightweight. Details: Features a quarter-zip
closure with a metal zipper, adding a modern touch and versatility to the
classic sweater design. No visible logos or branding, keeping the look clean
and sophisticated. Display Style: Displayed flat lay, showcasing the sweater''s
structure and fit without distractions. Background and Lighting: Set against
a soft beige background with gentle lighting to highlight the texture and
neutral tone of the sweater. Shape: Straight silhouette with a slightly fitted
shape, ensuring a comfortable and refined appearance. Sleeve Style: Long sleeves
with ribbed cuffs that fit snugly around the wrists, adding structure to the
design. Neckline: Polo-style collar with a quarter-zip that allows for adjustable
coverage at the neck, creating options for styling. Closures: Quarter-zip
closure located at the center front, offering an adjustable neckline. Cuffs
and Hems: Ribbed cuffs and hem to provide a structured fit and prevent stretching
over time. Fit: Relaxed fit that maintains a tailored appearance, suitable
for both casual and semi-formal settings. Length: Falls at the hip, offering
a classic sweater length that pairs well with various bottoms. Occasion: Ideal
for smart-casual and business-casual settings, as well as relaxed outings
during cooler seasons. Style Influence: Minimalist design with a modern touch,
perfect for understated, versatile wardrobes. Seasonal Suitability: Suitable
for fall and winter due to the wool-blend material, which offers warmth without
bulk. Texture: Soft knit texture that gives a cozy and refined feel, adding
a subtle visual interest. Weight: Medium-weight, ensuring warmth and comfort
without feeling overly heavy. Finish: Matte finish, contributing to the sweater''s
sophisticated and understated aesthetic. Layer Style: Works well as a top
layer over shirts or under coats, providing versatility for different looks.
Collar Type: Polo-style collar with a zipper, blending classic and contemporary
elements. Aesthetic Style: Simple, clean, and chic, suitable for those who
appreciate refined yet relaxed pieces. Target Audience: Designed for individuals
seeking a blend of comfort, warmth, and style in cooler months. Ease of Care:
Likely requires gentle washing, such as hand-washing or machine-washing on
a wool cycle to maintain its texture and quality. This light heather grey
quarter-zip sweater is a versatile and stylish addition, perfect for layering
and providing warmth while maintaining a minimalist, modern appeal.'
output:
url: samples/1730914749320__000004000_8.jpg
- text: '[trigger] Garment Type: Drawstring Trousers Fit and Style: Relaxed-fit
trousers with a casual yet polished aesthetic, suitable for both informal
and semi-formal settings. Color and Pattern: Dark charcoal gray with a subtle,
solid pattern that offers a sleek, versatile look. Fabric/Material: Likely
a soft blend of cotton and polyester, providing comfort, durability, and a
hint of stretch. Details: Features an elastic waistband with a drawstring
closure for adjustable comfort, and two side pockets for functionality. Display
Style: Displayed in a flat lay, allowing a clear view of the garment''s shape,
style, and details. Background and Lighting: Set against a light, neutral
background with soft lighting, emphasizing the trousers'' dark tone and clean
lines. Shape: Straight-leg cut that gives a streamlined silhouette, with a
slightly tapered look at the hem for a modern feel. Closures: Elasticated
waistband with a drawstring, allowing for a secure, customizable fit without
the need for a belt. Pockets: Two slanted side pockets for convenient storage,
designed to be functional without disrupting the garment''s smooth lines.
Cuffs and Hems: Simple hem style, giving a neat finish to the trouser legs.
Fit: Relaxed fit, balancing comfort with a tailored appearance. Length: Full-length
trousers that fall straight to the ankles, versatile for various occasions.
Occasion: Suitable for casual outings, work-from-home days, or even dressed
up for a smart-casual event. Style Influence: Minimalist and modern, with
a hint of athleisure influence due to the drawstring waistband. Seasonal Suitability:
Ideal for year-round wear, thanks to its versatile color and comfortable material.
Texture: Smooth, with a slight texture that adds depth to the dark color without
detracting from the overall sleekness. Weight: Medium-weight fabric, suitable
for layering in cooler weather or as standalone wear in moderate climates.
Aesthetic Style: Casual chic with a functional design, bridging the gap between
casual comfort and refined style. Target Audience: Designed for individuals
seeking a comfortable yet stylish option for casual or semi-formal wear. Ease
of Care: Likely machine washable, making it easy to care for and maintain.
These dark charcoal drawstring trousers offer a versatile addition to any
wardrobe, combining relaxed comfort with a polished, minimalist aesthetic.
The elastic waistband and soft fabric make them ideal for all-day wear, while
the streamlined silhouette allows for effortless styling across different
occasions.'
output:
url: samples/1730914807778__000004000_9.jpg
base_model: black-forest-labs/FLUX.1-schnell
instance_prompt: JLSCOM
license: apache-2.0
---
# JLSCOM_garment_LoRA_flux_schnell
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `JLSCOM` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/Amine-CV/JLSCOM_garment_LoRA_flux_schnell_v1/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-schnell', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('Amine-CV/JLSCOM_garment_LoRA_flux_schnell_v1', weight_name='JLSCOM_garment_LoRA_flux_schnell.safetensors')
image = pipeline('[trigger] Garment Type: Slim-Fit Jeans Fit and Style: Slim-fit, designed to hug the legs closely without being overly tight, offering a contemporary, streamlined appearance. Color and Pattern: Soft pastel green in a solid shade, adding a subtle pop of color to outfits while maintaining a minimalist, modern look. Fabric/Material: Crafted from a stretch cotton blend, providing comfort, flexibility, and durability. Details: Traditional five-pocket design with two front pockets, two back pockets, and a small coin pocket, all seamlessly integrated for functionality and style. Display Style: Displayed in a flat lay to highlight the overall structure and color. Background and Lighting: Set against a light gray background with soft, even lighting to bring out the pastel hue of the jeans without overshadowing it. Shape: Fitted shape with a tapered leg, maintaining a sleek and tailored silhouette from hip to ankle. Closures: Secured with a standard button and zipper fly in matching tones for a seamless look. Branding: Minimal branding with a discreet internal label; no external logos, maintaining a clean, understated aesthetic. Cuffs and Hems: Clean, stitched hems at the ankle, allowing the jeans to be worn full-length or slightly rolled for a casual look. Fit: Slim yet comfortable, allowing ease of movement while staying fitted through the legs. Length: Full length, designed to sit right at the ankle, suitable for pairing with both casual and semi-formal footwear. Occasion: Versatile enough for both casual daily wear and smart-casual occasions, adding a fresh twist to any wardrobe. Style Influence: Inspired by modern minimalist fashion, with a focus on clean lines and a refined color palette. Seasonal Suitability: Ideal for spring and summer wear due to the light color and breathable fabric. Texture: Smooth, soft finish with a hint of stretch, ensuring comfort during prolonged wear. Weight: Medium weight, suitable for warm weather without feeling too thin. Finish: Matte finish, enhancing the soft, pastel tone for a polished, sophisticated look. Aesthetic Style: Casual chic, blending comfort with a contemporary style that is effortlessly versatile. Target Audience: Suitable for individuals seeking stylish yet comfortable jeans with a unique color that is easy to style. Ease of Care: Machine washable, with colorfastness to retain the pastel shade after multiple washes.').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
pucpr-br/sbertimbau_news_2020 | pucpr-br | 2024-11-06T17:51:39Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pt",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-04-29T16:01:18Z | ---
library_name: sentence-transformers
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- pt
base_model:
- neuralmind/bert-base-portuguese-cased
---
# cristianomg10/sbertimbau_news_2020
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('cristianomg10/sbertimbau_news_2020')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('cristianomg10/sbertimbau_news_2020')
model = AutoModel.from_pretrained('cristianomg10/sbertimbau_news_2020')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=cristianomg10/sbertimbau_news_2020)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 250 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 0,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
```
@inproceedings{imai2024isitfinetotune,
title={{Is it Fine to Tune? Evaluating SentenceBERT Fine-tuning for Brazilian Portuguese Text Stream Classification}},
author={Bruno Yuiti Leão Imai and Cristiano Mesquita Garcia and Marcio Vinicius Rocha and Alessandro Lameiras Koerich and Alceu de Souza Britto Jr and Jean Paul Barddal},
booktitle={IEEE Big Data},
year={2024},
organization={IEEE}
}
``` |
pucpr-br/sbertimbau_news_2021 | pucpr-br | 2024-11-06T17:51:10Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pt",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-04-29T16:01:33Z | ---
library_name: sentence-transformers
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- pt
base_model:
- neuralmind/bert-base-portuguese-cased
---
# cristianomg10/sbertimbau_news_2021
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('cristianomg10/sbertimbau_news_2021')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('cristianomg10/sbertimbau_news_2021')
model = AutoModel.from_pretrained('cristianomg10/sbertimbau_news_2021')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=cristianomg10/sbertimbau_news_2021)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 250 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 0,
"weight_decay": 0.01
}
```
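Putting the numbers above together, the training call can be reconstructed roughly as in the sketch below. This is a minimal sketch only: the actual training data is not included in this card, so the example sentences and class labels are placeholders.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder (text, label) pairs; BatchAllTripletLoss expects one text per
# example plus an integer class label. The real news data is not in this card.
train_examples = [
    InputExample(texts=["Primeira notícia de exemplo"], label=0),
    InputExample(texts=["Outra notícia de exemplo"], label=1),
]

model = SentenceTransformer("neuralmind/bert-base-portuguese-cased")
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.BatchAllTripletLoss(model=model)

# Mirrors the fit() parameters listed above: 1 epoch, WarmupLinear scheduler,
# AdamW with lr=2e-5, no warmup steps, weight decay 0.01.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="WarmupLinear",
    warmup_steps=0,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
)
```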
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
```
@inproceedings{imai2024isitfinetotune,
title={{Is it Fine to Tune? Evaluating SentenceBERT Fine-tuning for Brazilian Portuguese Text Stream Classification}},
author={Bruno Yuiti Leão Imai and Cristiano Mesquita Garcia and Marcio Vinicius Rocha and Alessandro Lameiras Koerich and Alceu de Souza Britto Jr and Jean Paul Barddal},
booktitle={IEEE Big Data},
year={2024},
organization={IEEE}
}
``` |
pucpr-br/sbertimbau_news_2022 | pucpr-br | 2024-11-06T17:50:16Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pt",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-04-29T16:01:40Z | ---
library_name: sentence-transformers
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- pt
base_model:
- neuralmind/bert-base-portuguese-cased
---
# cristianomg10/sbertimbau_news_2022
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('cristianomg10/sbertimbau_news_2022')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('cristianomg10/sbertimbau_news_2022')
model = AutoModel.from_pretrained('cristianomg10/sbertimbau_news_2022')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=cristianomg10/sbertimbau_news_2022)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 250 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 0,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
```
@inproceedings{imai2024isitfinetotune,
title={{Is it Fine to Tune? Evaluating SentenceBERT Fine-tuning for Brazilian Portuguese Text Stream Classification}},
author={Bruno Yuiti Leão Imai and Cristiano Mesquita Garcia and Marcio Vinicius Rocha and Alessandro Lameiras Koerich and Alceu de Souza Britto Jr and Jean Paul Barddal},
booktitle={IEEE Big Data},
year={2024},
organization={IEEE}
}
``` |
besimray/miner1_bf80af68-32cc-43d3-b3e7-168fbf4be7e2_1730914108 | besimray | 2024-11-06T17:49:34Z | 5 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-06T17:28:28Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: miner1_bf80af68-32cc-43d3-b3e7-168fbf4be7e2_1730914108
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- MultiPL-E_train_data.json
ds_type: json
path: /workspace/input_data/MultiPL-E_train_data.json
type:
field_input: prompt
field_instruction: name
field_output: tests
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 5
eval_max_new_tokens: 128
eval_steps: 10
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hours_to_complete: 2
hub_model_id: besimray/miner1_bf80af68-32cc-43d3-b3e7-168fbf4be7e2_1730914108
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 2
mlflow_experiment_name: /tmp/MultiPL-E_train_data.json
model_type: LlamaForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
save_strategy: steps
sequence_len: 4096
started_at: '2024-11-06T17:28:28.787723'
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: besimray24-rayon
wandb_mode: online
wandb_project: Public_TuningSN
wandb_run: miner_id_24
wandb_runid: bf80af68-32cc-43d3-b3e7-168fbf4be7e2
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# miner1_bf80af68-32cc-43d3-b3e7-168fbf4be7e2_1730914108
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 52
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1241 | 0.0580 | 1 | 1.0218 |
| 0.4535 | 0.5797 | 10 | 0.3939 |
| 0.2759 | 1.1594 | 20 | 0.3536 |
| 0.2464 | 1.7391 | 30 | 0.3484 |
| 0.6037 | 2.3188 | 40 | 0.3479 |
| 0.3386 | 2.8986 | 50 | 0.3461 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Sobhon125/sobhon_lora_chat_model_Biology_full | Sobhon125 | 2024-11-06T17:49:00Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T17:37:58Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Sobhon125
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pucpr-br/sbertimbau_news_2023 | pucpr-br | 2024-11-06T17:48:18Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pt",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-04-29T16:01:57Z | ---
library_name: sentence-transformers
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- pt
base_model:
- neuralmind/bert-base-portuguese-cased
---
# cristianomg10/sbertimbau_news_2023
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('cristianomg10/sbertimbau_news_2023')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('cristianomg10/sbertimbau_news_2023')
model = AutoModel.from_pretrained('cristianomg10/sbertimbau_news_2023')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=cristianomg10/sbertimbau_news_2023)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 250 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 0,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
```
@inproceedings{imai2024isitfinetotune,
title={{Is it Fine to Tune? Evaluating SentenceBERT Fine-tuning for Brazilian Portuguese Text Stream Classification}},
author={Bruno Yuiti Leão Imai and Cristiano Mesquita Garcia and Marcio Vinicius Rocha and Alessandro Lameiras Koerich and Alceu de Souza Britto Jr and Jean Paul Barddal},
booktitle={IEEE Big Data},
year={2024},
organization={IEEE}
}
``` |
mradermacher/openchat-3.5-0106-11b-GGUF | mradermacher | 2024-11-06T17:46:12Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"openchat",
"mistral",
"C-RLFT",
"en",
"base_model:CallComply/openchat-3.5-0106-11b",
"base_model:quantized:CallComply/openchat-3.5-0106-11b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T15:21:55Z | ---
base_model: CallComply/openchat-3.5-0106-11b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- openchat
- mistral
- C-RLFT
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/CallComply/openchat-3.5-0106-11b
<!-- provided-files -->
weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
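For a concrete (hedged) example, one of the single-file quants listed below can be loaded with `llama-cpp-python`; the context size and generation settings here are assumptions, not part of this repository.
```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes the Q4_K_M file from the table below has already been downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="openchat-3.5-0106-11b.Q4_K_M.gguf",
    n_ctx=4096,       # context window (assumption; adjust as needed)
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```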
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/openchat-3.5-0106-11b-GGUF/resolve/main/openchat-3.5-0106-11b.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jeongyoun/bert-base-uncased-finetuned-ner-increased | jeongyoun | 2024-11-06T17:45:50Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-08-30T12:12:15Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner-increased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner-increased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0064
- Precision: 0.9933
- Recall: 0.9941
- F1: 0.9937
- Accuracy: 0.9981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
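A minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments` is shown below; the dataset, label set, and tokenization pipeline are not documented in this card, so those parts are assumptions.
```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# The NER label set is not given in the card; num_labels=9 is a placeholder.
model = AutoModelForTokenClassification.from_pretrained("bert-base-uncased", num_labels=9)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-ner-increased",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,   # total train batch size 64
    num_train_epochs=3,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
)

# Token-classification train/eval datasets and metric computation are omitted:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```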
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0102 | 0.9997 | 1562 | 0.0078 | 0.9902 | 0.9925 | 0.9914 | 0.9974 |
| 0.0053 | 2.0 | 3125 | 0.0068 | 0.9940 | 0.9926 | 0.9933 | 0.9980 |
| 0.0032 | 2.9990 | 4686 | 0.0067 | 0.9942 | 0.9935 | 0.9939 | 0.9982 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mixklim/poca-SoccerTwos | mixklim | 2024-11-06T17:45:47Z | 17 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-11-06T17:11:35Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial showing how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mixklim/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ncls-p/esgi-td3-nlp | ncls-p | 2024-11-06T17:44:18Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-06T15:57:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
laurencassidy/lauren-tinyllama-1.1b-chat | laurencassidy | 2024-11-06T17:40:14Z | 5 | 0 | null | [
"safetensors",
"llama",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-11-06T17:20:31Z | ---
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
## Model Overview
This is a fine-tuned version of the TinyLlama-1.1B-Chat model, trained with ORPO (Odds Ratio Preference Optimization) on the mlabonne/orpo-dpo-mix-40k preference dataset to enhance conversational and preference-based response generation.
The model uses the LoRA (Low-Rank Adaptation) technique to achieve efficient adaptation with minimal additional parameters, allowing it to learn task-specific knowledge without extensive computational demands.
## Hyperparameters
- LoRA configuration: r=8, lora_alpha=16, lora_dropout=0.1
## ORPO Trainer Configuration
- Learning Rate: 1e-5
- Max Length: 2048
- Batch Size: 1
- Epochs: 1
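A minimal sketch of how these settings map onto `peft` and `trl` is shown below; the exact training script is not part of this card, so dataset handling and any unlisted trainer arguments are assumptions.
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# LoRA configuration listed above: r=8, alpha=16, dropout=0.1
peft_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

# ORPO trainer configuration listed above: lr=1e-5, max_length=2048,
# batch size 1, a single epoch.
args = ORPOConfig(
    output_dir="lauren-tinyllama-1.1b-chat",
    learning_rate=1e-5,
    max_length=2048,
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

train_dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```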
## Model Performance
The model was evaluated on the hellaswag task, yielding the following metrics:
- Accuracy: 46.59%
- Normalized Accuracy: 60.43% |
neopolita/gorilla-openfunctions-v2-gguf | neopolita | 2024-11-06T17:34:37Z | 15 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T17:01:54Z | ---
{}
---
# GGUF quants for [**gorilla-llm/gorilla-openfunctions-v2**](https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2) using [llama.cpp](https://github.com/ggerganov/llama.cpp)
**Terms of Use**: Please check the [**original model**](https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2)
<picture>
<img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>
## Quants
* `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0; however, it has quicker inference than q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy and resource usage, with slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
davidbzyk/QuantQwen2.5-32b-merged_16bit | davidbzyk | 2024-11-06T17:34:13Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T17:25:17Z | ---
base_model: unsloth/qwen2.5-32b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** davidbzyk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-32b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Xu-Ouyang/pythia-6.9b-deduped-int8-step16-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-06T17:34:09Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-06T17:32:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/ASAP_FineTuningBERT_Aug_k25_task1_organization_fold1 | MayBashendy | 2024-11-06T17:32:05Z | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T16:56:01Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k25_task1_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k25_task1_organization_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5742
- Qwk: 0.5276
- Mse: 0.5742
- Rmse: 0.7578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 0.0049 | 2 | 10.1128 | 0.0 | 10.1128 | 3.1801 |
| No log | 0.0098 | 4 | 8.9943 | 0.0 | 8.9943 | 2.9990 |
| No log | 0.0147 | 6 | 7.7995 | 0.0324 | 7.7995 | 2.7928 |
| No log | 0.0197 | 8 | 6.5605 | 0.0016 | 6.5605 | 2.5614 |
| No log | 0.0246 | 10 | 5.2675 | 0.0 | 5.2675 | 2.2951 |
| No log | 0.0295 | 12 | 4.1268 | 0.0093 | 4.1268 | 2.0315 |
| No log | 0.0344 | 14 | 2.9789 | 0.0303 | 2.9789 | 1.7259 |
| No log | 0.0393 | 16 | 2.1328 | 0.0040 | 2.1328 | 1.4604 |
| No log | 0.0442 | 18 | 1.6251 | 0.0 | 1.6251 | 1.2748 |
| No log | 0.0491 | 20 | 1.2514 | 0.2066 | 1.2514 | 1.1186 |
| No log | 0.0541 | 22 | 1.0487 | 0.0768 | 1.0487 | 1.0241 |
| No log | 0.0590 | 24 | 0.9011 | 0.0211 | 0.9011 | 0.9492 |
| No log | 0.0639 | 26 | 0.8605 | 0.0106 | 0.8605 | 0.9276 |
| No log | 0.0688 | 28 | 0.8888 | 0.0211 | 0.8888 | 0.9427 |
| No log | 0.0737 | 30 | 0.8545 | 0.0 | 0.8545 | 0.9244 |
| No log | 0.0786 | 32 | 0.8596 | 0.0 | 0.8596 | 0.9271 |
| No log | 0.0835 | 34 | 0.8934 | 0.0782 | 0.8934 | 0.9452 |
| No log | 0.0885 | 36 | 0.8689 | 0.0171 | 0.8689 | 0.9322 |
| No log | 0.0934 | 38 | 0.9398 | 0.0 | 0.9398 | 0.9694 |
| No log | 0.0983 | 40 | 0.9685 | 0.0 | 0.9685 | 0.9841 |
| No log | 0.1032 | 42 | 0.8812 | 0.0 | 0.8812 | 0.9387 |
| No log | 0.1081 | 44 | 0.9606 | 0.0 | 0.9606 | 0.9801 |
| No log | 0.1130 | 46 | 0.9836 | 0.0 | 0.9836 | 0.9918 |
| No log | 0.1179 | 48 | 0.9136 | 0.0 | 0.9136 | 0.9558 |
| No log | 0.1229 | 50 | 0.8807 | 0.0 | 0.8807 | 0.9385 |
| No log | 0.1278 | 52 | 0.9246 | 0.0 | 0.9246 | 0.9616 |
| No log | 0.1327 | 54 | 0.9487 | 0.0106 | 0.9487 | 0.9740 |
| No log | 0.1376 | 56 | 0.9474 | 0.0326 | 0.9474 | 0.9733 |
| No log | 0.1425 | 58 | 0.8869 | 0.0172 | 0.8869 | 0.9418 |
| No log | 0.1474 | 60 | 0.8318 | 0.0 | 0.8318 | 0.9120 |
| No log | 0.1523 | 62 | 0.8245 | 0.0 | 0.8245 | 0.9080 |
| No log | 0.1572 | 64 | 0.8219 | 0.0 | 0.8219 | 0.9066 |
| No log | 0.1622 | 66 | 0.8613 | 0.0 | 0.8613 | 0.9281 |
| No log | 0.1671 | 68 | 0.8330 | 0.0 | 0.8330 | 0.9127 |
| No log | 0.1720 | 70 | 0.8057 | 0.0067 | 0.8057 | 0.8976 |
| No log | 0.1769 | 72 | 0.7668 | 0.0390 | 0.7668 | 0.8757 |
| No log | 0.1818 | 74 | 0.7325 | 0.0276 | 0.7325 | 0.8558 |
| No log | 0.1867 | 76 | 0.7240 | 0.0443 | 0.7240 | 0.8509 |
| No log | 0.1916 | 78 | 0.7276 | 0.0645 | 0.7276 | 0.8530 |
| No log | 0.1966 | 80 | 0.7571 | 0.0752 | 0.7571 | 0.8701 |
| No log | 0.2015 | 82 | 0.7769 | 0.0752 | 0.7769 | 0.8814 |
| No log | 0.2064 | 84 | 0.7690 | 0.0645 | 0.7690 | 0.8769 |
| No log | 0.2113 | 86 | 0.7435 | 0.0583 | 0.7435 | 0.8623 |
| No log | 0.2162 | 88 | 0.7259 | 0.0276 | 0.7259 | 0.8520 |
| No log | 0.2211 | 90 | 0.7201 | 0.0379 | 0.7201 | 0.8486 |
| No log | 0.2260 | 92 | 0.7152 | 0.0482 | 0.7152 | 0.8457 |
| No log | 0.2310 | 94 | 0.7174 | 0.0482 | 0.7174 | 0.8470 |
| No log | 0.2359 | 96 | 0.7270 | 0.0470 | 0.7270 | 0.8526 |
| No log | 0.2408 | 98 | 0.7385 | 0.2595 | 0.7385 | 0.8593 |
| No log | 0.2457 | 100 | 0.7141 | 0.1470 | 0.7141 | 0.8450 |
| No log | 0.2506 | 102 | 0.7350 | 0.1244 | 0.7350 | 0.8573 |
| No log | 0.2555 | 104 | 0.7392 | 0.1205 | 0.7392 | 0.8598 |
| No log | 0.2604 | 106 | 0.7598 | 0.0568 | 0.7598 | 0.8716 |
| No log | 0.2654 | 108 | 0.8377 | 0.0444 | 0.8377 | 0.9153 |
| No log | 0.2703 | 110 | 0.8516 | 0.0418 | 0.8516 | 0.9228 |
| No log | 0.2752 | 112 | 0.8401 | 0.0431 | 0.8401 | 0.9166 |
| No log | 0.2801 | 114 | 0.8037 | 0.0520 | 0.8037 | 0.8965 |
| No log | 0.2850 | 116 | 0.7879 | 0.0728 | 0.7879 | 0.8877 |
| No log | 0.2899 | 118 | 0.7801 | 0.1424 | 0.7801 | 0.8832 |
| No log | 0.2948 | 120 | 0.7344 | 0.1201 | 0.7344 | 0.8570 |
| No log | 0.2998 | 122 | 0.6831 | 0.1459 | 0.6831 | 0.8265 |
| No log | 0.3047 | 124 | 0.6612 | 0.1889 | 0.6612 | 0.8131 |
| No log | 0.3096 | 126 | 0.6524 | 0.3548 | 0.6524 | 0.8077 |
| No log | 0.3145 | 128 | 0.6201 | 0.4054 | 0.6201 | 0.7874 |
| No log | 0.3194 | 130 | 0.5923 | 0.3200 | 0.5923 | 0.7696 |
| No log | 0.3243 | 132 | 0.6082 | 0.2435 | 0.6082 | 0.7799 |
| No log | 0.3292 | 134 | 0.6437 | 0.1258 | 0.6437 | 0.8023 |
| No log | 0.3342 | 136 | 0.6357 | 0.1563 | 0.6357 | 0.7973 |
| No log | 0.3391 | 138 | 0.6285 | 0.4111 | 0.6285 | 0.7928 |
| No log | 0.3440 | 140 | 0.7422 | 0.4357 | 0.7422 | 0.8615 |
| No log | 0.3489 | 142 | 0.7150 | 0.4322 | 0.7150 | 0.8456 |
| No log | 0.3538 | 144 | 0.6028 | 0.4091 | 0.6028 | 0.7764 |
| No log | 0.3587 | 146 | 0.6015 | 0.4225 | 0.6015 | 0.7756 |
| No log | 0.3636 | 148 | 0.6951 | 0.4823 | 0.6951 | 0.8337 |
| No log | 0.3686 | 150 | 0.7038 | 0.4990 | 0.7038 | 0.8389 |
| No log | 0.3735 | 152 | 0.5787 | 0.4695 | 0.5787 | 0.7607 |
| No log | 0.3784 | 154 | 0.6215 | 0.3352 | 0.6215 | 0.7884 |
| No log | 0.3833 | 156 | 0.6272 | 0.3477 | 0.6272 | 0.7919 |
| No log | 0.3882 | 158 | 0.5507 | 0.4780 | 0.5507 | 0.7421 |
| No log | 0.3931 | 160 | 0.5994 | 0.4818 | 0.5994 | 0.7742 |
| No log | 0.3980 | 162 | 0.5815 | 0.4971 | 0.5815 | 0.7626 |
| No log | 0.4029 | 164 | 0.5675 | 0.3627 | 0.5675 | 0.7533 |
| No log | 0.4079 | 166 | 0.5865 | 0.2939 | 0.5865 | 0.7659 |
| No log | 0.4128 | 168 | 0.5698 | 0.3939 | 0.5698 | 0.7548 |
| No log | 0.4177 | 170 | 0.6356 | 0.4899 | 0.6356 | 0.7973 |
| No log | 0.4226 | 172 | 0.6942 | 0.4900 | 0.6942 | 0.8332 |
| No log | 0.4275 | 174 | 0.6633 | 0.4815 | 0.6633 | 0.8144 |
| No log | 0.4324 | 176 | 0.5872 | 0.4197 | 0.5872 | 0.7663 |
| No log | 0.4373 | 178 | 0.6004 | 0.2276 | 0.6004 | 0.7748 |
| No log | 0.4423 | 180 | 0.6033 | 0.2297 | 0.6033 | 0.7767 |
| No log | 0.4472 | 182 | 0.5766 | 0.3970 | 0.5766 | 0.7593 |
| No log | 0.4521 | 184 | 0.6689 | 0.4717 | 0.6689 | 0.8178 |
| No log | 0.4570 | 186 | 0.7695 | 0.4042 | 0.7695 | 0.8772 |
| No log | 0.4619 | 188 | 0.7469 | 0.4181 | 0.7469 | 0.8642 |
| No log | 0.4668 | 190 | 0.6979 | 0.3625 | 0.6979 | 0.8354 |
| No log | 0.4717 | 192 | 0.7124 | 0.2142 | 0.7124 | 0.8441 |
| No log | 0.4767 | 194 | 0.7172 | 0.3972 | 0.7172 | 0.8469 |
| No log | 0.4816 | 196 | 0.7136 | 0.4752 | 0.7136 | 0.8447 |
| No log | 0.4865 | 198 | 0.7077 | 0.4783 | 0.7077 | 0.8413 |
| No log | 0.4914 | 200 | 0.7011 | 0.4889 | 0.7011 | 0.8373 |
| No log | 0.4963 | 202 | 0.6820 | 0.4918 | 0.6820 | 0.8258 |
| No log | 0.5012 | 204 | 0.6660 | 0.5004 | 0.6660 | 0.8161 |
| No log | 0.5061 | 206 | 0.6313 | 0.5193 | 0.6313 | 0.7945 |
| No log | 0.5111 | 208 | 0.6562 | 0.5317 | 0.6562 | 0.8101 |
| No log | 0.5160 | 210 | 0.5680 | 0.5665 | 0.5680 | 0.7537 |
| No log | 0.5209 | 212 | 0.5510 | 0.5565 | 0.5510 | 0.7423 |
| No log | 0.5258 | 214 | 0.5106 | 0.5486 | 0.5106 | 0.7146 |
| No log | 0.5307 | 216 | 0.5433 | 0.5795 | 0.5433 | 0.7371 |
| No log | 0.5356 | 218 | 0.4979 | 0.5820 | 0.4979 | 0.7056 |
| No log | 0.5405 | 220 | 0.4783 | 0.5050 | 0.4783 | 0.6916 |
| No log | 0.5455 | 222 | 0.4630 | 0.5287 | 0.4630 | 0.6805 |
| No log | 0.5504 | 224 | 0.4581 | 0.5551 | 0.4581 | 0.6768 |
| No log | 0.5553 | 226 | 0.5263 | 0.5927 | 0.5263 | 0.7255 |
| No log | 0.5602 | 228 | 0.7635 | 0.4351 | 0.7635 | 0.8738 |
| No log | 0.5651 | 230 | 1.0279 | 0.2025 | 1.0279 | 1.0138 |
| No log | 0.5700 | 232 | 1.0434 | 0.2820 | 1.0434 | 1.0215 |
| No log | 0.5749 | 234 | 0.8612 | 0.3846 | 0.8612 | 0.9280 |
| No log | 0.5799 | 236 | 0.7987 | 0.4225 | 0.7987 | 0.8937 |
| No log | 0.5848 | 238 | 0.8258 | 0.4022 | 0.8258 | 0.9087 |
| No log | 0.5897 | 240 | 0.7656 | 0.4263 | 0.7656 | 0.8750 |
| No log | 0.5946 | 242 | 0.7307 | 0.4419 | 0.7307 | 0.8548 |
| No log | 0.5995 | 244 | 0.7634 | 0.4449 | 0.7634 | 0.8737 |
| No log | 0.6044 | 246 | 0.6035 | 0.4980 | 0.6035 | 0.7769 |
| No log | 0.6093 | 248 | 0.5288 | 0.4402 | 0.5288 | 0.7272 |
| No log | 0.6143 | 250 | 0.5195 | 0.4752 | 0.5195 | 0.7207 |
| No log | 0.6192 | 252 | 0.5899 | 0.5062 | 0.5899 | 0.7681 |
| No log | 0.6241 | 254 | 0.6204 | 0.5011 | 0.6204 | 0.7877 |
| No log | 0.6290 | 256 | 0.7014 | 0.4740 | 0.7014 | 0.8375 |
| No log | 0.6339 | 258 | 0.6151 | 0.4904 | 0.6151 | 0.7843 |
| No log | 0.6388 | 260 | 0.5681 | 0.4732 | 0.5681 | 0.7537 |
| No log | 0.6437 | 262 | 0.5711 | 0.3029 | 0.5711 | 0.7557 |
| No log | 0.6486 | 264 | 0.5710 | 0.3919 | 0.5710 | 0.7557 |
| No log | 0.6536 | 266 | 0.5865 | 0.4336 | 0.5865 | 0.7658 |
| No log | 0.6585 | 268 | 0.5858 | 0.4150 | 0.5858 | 0.7654 |
| No log | 0.6634 | 270 | 0.5771 | 0.2926 | 0.5771 | 0.7597 |
| No log | 0.6683 | 272 | 0.5823 | 0.2582 | 0.5823 | 0.7631 |
| No log | 0.6732 | 274 | 0.5503 | 0.4403 | 0.5503 | 0.7418 |
| No log | 0.6781 | 276 | 0.6317 | 0.5141 | 0.6317 | 0.7948 |
| No log | 0.6830 | 278 | 0.6959 | 0.4922 | 0.6959 | 0.8342 |
| No log | 0.6880 | 280 | 0.6101 | 0.5248 | 0.6101 | 0.7811 |
| No log | 0.6929 | 282 | 0.5580 | 0.4842 | 0.5580 | 0.7470 |
| No log | 0.6978 | 284 | 0.5688 | 0.4833 | 0.5688 | 0.7542 |
| No log | 0.7027 | 286 | 0.6073 | 0.5096 | 0.6073 | 0.7793 |
| No log | 0.7076 | 288 | 0.6491 | 0.5226 | 0.6491 | 0.8057 |
| No log | 0.7125 | 290 | 0.6436 | 0.5091 | 0.6436 | 0.8023 |
| No log | 0.7174 | 292 | 0.6434 | 0.5084 | 0.6434 | 0.8021 |
| No log | 0.7224 | 294 | 0.5828 | 0.4337 | 0.5828 | 0.7634 |
| No log | 0.7273 | 296 | 0.5625 | 0.3556 | 0.5625 | 0.7500 |
| No log | 0.7322 | 298 | 0.5582 | 0.3241 | 0.5582 | 0.7471 |
| No log | 0.7371 | 300 | 0.5544 | 0.4767 | 0.5544 | 0.7446 |
| No log | 0.7420 | 302 | 0.6449 | 0.5024 | 0.6449 | 0.8031 |
| No log | 0.7469 | 304 | 0.6234 | 0.5138 | 0.6234 | 0.7896 |
| No log | 0.7518 | 306 | 0.5243 | 0.5019 | 0.5243 | 0.7241 |
| No log | 0.7568 | 308 | 0.5382 | 0.3475 | 0.5382 | 0.7336 |
| No log | 0.7617 | 310 | 0.5320 | 0.3510 | 0.5320 | 0.7294 |
| No log | 0.7666 | 312 | 0.4957 | 0.4885 | 0.4957 | 0.7040 |
| No log | 0.7715 | 314 | 0.5830 | 0.5293 | 0.5830 | 0.7635 |
| No log | 0.7764 | 316 | 0.5886 | 0.5480 | 0.5886 | 0.7672 |
| No log | 0.7813 | 318 | 0.4838 | 0.5468 | 0.4838 | 0.6956 |
| No log | 0.7862 | 320 | 0.4668 | 0.5205 | 0.4668 | 0.6832 |
| No log | 0.7912 | 322 | 0.4647 | 0.4996 | 0.4647 | 0.6817 |
| No log | 0.7961 | 324 | 0.4582 | 0.5328 | 0.4582 | 0.6769 |
| No log | 0.8010 | 326 | 0.4574 | 0.5561 | 0.4574 | 0.6763 |
| No log | 0.8059 | 328 | 0.4591 | 0.5546 | 0.4591 | 0.6775 |
| No log | 0.8108 | 330 | 0.4420 | 0.5515 | 0.4420 | 0.6648 |
| No log | 0.8157 | 332 | 0.4386 | 0.5533 | 0.4386 | 0.6623 |
| No log | 0.8206 | 334 | 0.4409 | 0.5459 | 0.4409 | 0.6640 |
| No log | 0.8256 | 336 | 0.4340 | 0.5524 | 0.4340 | 0.6588 |
| No log | 0.8305 | 338 | 0.4475 | 0.5581 | 0.4475 | 0.6689 |
| No log | 0.8354 | 340 | 0.4293 | 0.5655 | 0.4293 | 0.6552 |
| No log | 0.8403 | 342 | 0.4330 | 0.5695 | 0.4330 | 0.6580 |
| No log | 0.8452 | 344 | 0.4234 | 0.5587 | 0.4234 | 0.6507 |
| No log | 0.8501 | 346 | 0.4824 | 0.5736 | 0.4824 | 0.6945 |
| No log | 0.8550 | 348 | 0.5140 | 0.5911 | 0.5140 | 0.7169 |
| No log | 0.8600 | 350 | 0.4262 | 0.5602 | 0.4262 | 0.6529 |
| No log | 0.8649 | 352 | 0.4381 | 0.5275 | 0.4381 | 0.6619 |
| No log | 0.8698 | 354 | 0.4407 | 0.5713 | 0.4407 | 0.6639 |
| No log | 0.8747 | 356 | 0.6305 | 0.5876 | 0.6305 | 0.7940 |
| No log | 0.8796 | 358 | 0.7397 | 0.5399 | 0.7397 | 0.8601 |
| No log | 0.8845 | 360 | 0.5972 | 0.5745 | 0.5972 | 0.7728 |
| No log | 0.8894 | 362 | 0.4624 | 0.5444 | 0.4624 | 0.6800 |
| No log | 0.8943 | 364 | 0.4427 | 0.5714 | 0.4427 | 0.6654 |
| No log | 0.8993 | 366 | 0.4513 | 0.5967 | 0.4513 | 0.6718 |
| No log | 0.9042 | 368 | 0.5772 | 0.5873 | 0.5772 | 0.7597 |
| No log | 0.9091 | 370 | 0.6064 | 0.6086 | 0.6064 | 0.7787 |
| No log | 0.9140 | 372 | 0.4612 | 0.6155 | 0.4612 | 0.6791 |
| No log | 0.9189 | 374 | 0.4125 | 0.5595 | 0.4125 | 0.6423 |
| No log | 0.9238 | 376 | 0.4153 | 0.5622 | 0.4153 | 0.6445 |
| No log | 0.9287 | 378 | 0.4368 | 0.5968 | 0.4368 | 0.6609 |
| No log | 0.9337 | 380 | 0.4642 | 0.6211 | 0.4642 | 0.6813 |
| No log | 0.9386 | 382 | 0.4825 | 0.6245 | 0.4825 | 0.6946 |
| No log | 0.9435 | 384 | 0.4562 | 0.6044 | 0.4562 | 0.6755 |
| No log | 0.9484 | 386 | 0.4663 | 0.6003 | 0.4663 | 0.6828 |
| No log | 0.9533 | 388 | 0.5363 | 0.6060 | 0.5363 | 0.7323 |
| No log | 0.9582 | 390 | 0.7487 | 0.5385 | 0.7487 | 0.8653 |
| No log | 0.9631 | 392 | 0.7755 | 0.5165 | 0.7755 | 0.8806 |
| No log | 0.9681 | 394 | 0.6010 | 0.5651 | 0.6010 | 0.7753 |
| No log | 0.9730 | 396 | 0.5072 | 0.5756 | 0.5072 | 0.7122 |
| No log | 0.9779 | 398 | 0.5508 | 0.5799 | 0.5508 | 0.7422 |
| No log | 0.9828 | 400 | 0.6093 | 0.5552 | 0.6093 | 0.7806 |
| No log | 0.9877 | 402 | 0.7580 | 0.5384 | 0.7580 | 0.8706 |
| No log | 0.9926 | 404 | 0.7525 | 0.5377 | 0.7525 | 0.8675 |
| No log | 0.9975 | 406 | 0.6594 | 0.5489 | 0.6594 | 0.8120 |
| No log | 1.0025 | 408 | 0.6561 | 0.5508 | 0.6561 | 0.8100 |
| No log | 1.0074 | 410 | 0.5611 | 0.5819 | 0.5611 | 0.7490 |
| No log | 1.0123 | 412 | 0.5213 | 0.5625 | 0.5213 | 0.7220 |
| No log | 1.0172 | 414 | 0.5723 | 0.5771 | 0.5723 | 0.7565 |
| No log | 1.0221 | 416 | 0.5687 | 0.5930 | 0.5687 | 0.7541 |
| No log | 1.0270 | 418 | 0.4838 | 0.6001 | 0.4838 | 0.6956 |
| No log | 1.0319 | 420 | 0.4607 | 0.6038 | 0.4607 | 0.6788 |
| No log | 1.0369 | 422 | 0.4615 | 0.6063 | 0.4615 | 0.6794 |
| No log | 1.0418 | 424 | 0.4450 | 0.5948 | 0.4450 | 0.6671 |
| No log | 1.0467 | 426 | 0.4441 | 0.6054 | 0.4441 | 0.6664 |
| No log | 1.0516 | 428 | 0.4670 | 0.6159 | 0.4670 | 0.6834 |
| No log | 1.0565 | 430 | 0.5026 | 0.6090 | 0.5026 | 0.7090 |
| No log | 1.0614 | 432 | 0.4743 | 0.5952 | 0.4743 | 0.6887 |
| No log | 1.0663 | 434 | 0.4243 | 0.5902 | 0.4243 | 0.6514 |
| No log | 1.0713 | 436 | 0.4351 | 0.5888 | 0.4351 | 0.6596 |
| No log | 1.0762 | 438 | 0.4700 | 0.6032 | 0.4700 | 0.6855 |
| No log | 1.0811 | 440 | 0.4343 | 0.5845 | 0.4343 | 0.6590 |
| No log | 1.0860 | 442 | 0.4483 | 0.5462 | 0.4483 | 0.6696 |
| No log | 1.0909 | 444 | 0.4533 | 0.5365 | 0.4533 | 0.6733 |
| No log | 1.0958 | 446 | 0.4362 | 0.5746 | 0.4362 | 0.6605 |
| No log | 1.1007 | 448 | 0.4928 | 0.5906 | 0.4928 | 0.7020 |
| No log | 1.1057 | 450 | 0.5399 | 0.6173 | 0.5399 | 0.7348 |
| No log | 1.1106 | 452 | 0.4619 | 0.5991 | 0.4619 | 0.6797 |
| No log | 1.1155 | 454 | 0.4252 | 0.5727 | 0.4252 | 0.6521 |
| No log | 1.1204 | 456 | 0.4228 | 0.5741 | 0.4228 | 0.6502 |
| No log | 1.1253 | 458 | 0.4567 | 0.6257 | 0.4567 | 0.6758 |
| No log | 1.1302 | 460 | 0.6388 | 0.6334 | 0.6388 | 0.7992 |
| No log | 1.1351 | 462 | 0.6192 | 0.6482 | 0.6192 | 0.7869 |
| No log | 1.1400 | 464 | 0.4595 | 0.6285 | 0.4595 | 0.6778 |
| No log | 1.1450 | 466 | 0.4330 | 0.5952 | 0.4330 | 0.6580 |
| No log | 1.1499 | 468 | 0.4991 | 0.6471 | 0.4991 | 0.7065 |
| No log | 1.1548 | 470 | 0.6608 | 0.7030 | 0.6608 | 0.8129 |
| No log | 1.1597 | 472 | 0.5729 | 0.6976 | 0.5729 | 0.7569 |
| No log | 1.1646 | 474 | 0.4662 | 0.6349 | 0.4662 | 0.6828 |
| No log | 1.1695 | 476 | 0.4311 | 0.6056 | 0.4311 | 0.6566 |
| No log | 1.1744 | 478 | 0.4604 | 0.6280 | 0.4604 | 0.6786 |
| No log | 1.1794 | 480 | 0.5520 | 0.6610 | 0.5520 | 0.7430 |
| No log | 1.1843 | 482 | 0.5067 | 0.6294 | 0.5067 | 0.7118 |
| No log | 1.1892 | 484 | 0.4372 | 0.5604 | 0.4372 | 0.6612 |
| No log | 1.1941 | 486 | 0.4510 | 0.4927 | 0.4510 | 0.6716 |
| No log | 1.1990 | 488 | 0.4446 | 0.4944 | 0.4446 | 0.6668 |
| No log | 1.2039 | 490 | 0.4548 | 0.5763 | 0.4548 | 0.6744 |
| No log | 1.2088 | 492 | 0.4975 | 0.6070 | 0.4975 | 0.7053 |
| No log | 1.2138 | 494 | 0.5297 | 0.6055 | 0.5297 | 0.7278 |
| No log | 1.2187 | 496 | 0.5612 | 0.6027 | 0.5612 | 0.7492 |
| No log | 1.2236 | 498 | 0.4947 | 0.5789 | 0.4947 | 0.7034 |
| 0.5107 | 1.2285 | 500 | 0.4709 | 0.5476 | 0.4709 | 0.6862 |
| 0.5107 | 1.2334 | 502 | 0.4801 | 0.5732 | 0.4801 | 0.6929 |
| 0.5107 | 1.2383 | 504 | 0.5205 | 0.5463 | 0.5205 | 0.7215 |
| 0.5107 | 1.2432 | 506 | 0.6151 | 0.5699 | 0.6151 | 0.7843 |
| 0.5107 | 1.2482 | 508 | 0.5700 | 0.5693 | 0.5700 | 0.7550 |
| 0.5107 | 1.2531 | 510 | 0.4834 | 0.5265 | 0.4834 | 0.6953 |
| 0.5107 | 1.2580 | 512 | 0.4777 | 0.5232 | 0.4777 | 0.6912 |
| 0.5107 | 1.2629 | 514 | 0.5004 | 0.5599 | 0.5004 | 0.7074 |
| 0.5107 | 1.2678 | 516 | 0.6491 | 0.5823 | 0.6491 | 0.8056 |
| 0.5107 | 1.2727 | 518 | 0.7351 | 0.6037 | 0.7351 | 0.8574 |
| 0.5107 | 1.2776 | 520 | 0.5979 | 0.5796 | 0.5979 | 0.7733 |
| 0.5107 | 1.2826 | 522 | 0.4755 | 0.5737 | 0.4755 | 0.6896 |
| 0.5107 | 1.2875 | 524 | 0.4747 | 0.4636 | 0.4747 | 0.6890 |
| 0.5107 | 1.2924 | 526 | 0.4686 | 0.4788 | 0.4686 | 0.6845 |
| 0.5107 | 1.2973 | 528 | 0.4581 | 0.5544 | 0.4581 | 0.6768 |
| 0.5107 | 1.3022 | 530 | 0.5497 | 0.6133 | 0.5497 | 0.7414 |
| 0.5107 | 1.3071 | 532 | 0.5933 | 0.6177 | 0.5933 | 0.7703 |
| 0.5107 | 1.3120 | 534 | 0.4957 | 0.5906 | 0.4957 | 0.7041 |
| 0.5107 | 1.3170 | 536 | 0.4449 | 0.5474 | 0.4449 | 0.6670 |
| 0.5107 | 1.3219 | 538 | 0.4461 | 0.5397 | 0.4461 | 0.6679 |
| 0.5107 | 1.3268 | 540 | 0.4911 | 0.5874 | 0.4911 | 0.7008 |
| 0.5107 | 1.3317 | 542 | 0.5566 | 0.6092 | 0.5566 | 0.7461 |
| 0.5107 | 1.3366 | 544 | 0.6142 | 0.5899 | 0.6142 | 0.7837 |
| 0.5107 | 1.3415 | 546 | 0.5344 | 0.5407 | 0.5344 | 0.7310 |
| 0.5107 | 1.3464 | 548 | 0.5157 | 0.4574 | 0.5157 | 0.7181 |
| 0.5107 | 1.3514 | 550 | 0.5250 | 0.4391 | 0.5250 | 0.7246 |
| 0.5107 | 1.3563 | 552 | 0.5342 | 0.4987 | 0.5342 | 0.7309 |
| 0.5107 | 1.3612 | 554 | 0.5742 | 0.5276 | 0.5742 | 0.7578 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
BananaPancake76/Camembert | BananaPancake76 | 2024-11-06T17:29:41Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-06T17:19:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
glif-loradex-trainer/insectagon_mugshot_prodigy | glif-loradex-trainer | 2024-11-06T17:15:08Z | 411 | 1 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-11-06T17:14:14Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730913086401__000003000_0.jpg
text: A cartoon Jedi with green lightsaber [mug$hot]
- output:
url: samples/1730913110160__000003000_1.jpg
text: A lion roaring [mug$hot]
- output:
url: samples/1730913133582__000003000_2.jpg
text: AN ACTION SCENE [mug$hot]
- output:
url: samples/1730913157899__000003000_3.jpg
text: A woman holding a cartoon CAT [mug$hot]
- output:
url: samples/1730913181589__000003000_4.jpg
text: THE JOKER [mug$hot]
- output:
url: samples/1730913205023__000003000_5.jpg
text: BATMAN cartoon IN GOTHAM [mug$hot]
- output:
url: samples/1730913228546__000003000_6.jpg
text: a blue Teddy bear Kaiju vs Godzilla [mug$hot]
base_model: black-forest-labs/FLUX.1-dev
trigger: mug$hot
instance_prompt: mug$hot
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# mugshot_prodigy
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `insectagon`.
<Gallery />
## Trigger words
You should use `mug$hot` to trigger the image generation.
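For reference, here is a minimal `diffusers` sketch for loading this LoRA on top of FLUX.1-dev. It is an illustrative assumption rather than an official recipe; the guidance scale and step count are placeholders, and you may need to pass `weight_name="<file>.safetensors"` if the adapter file is not auto-discovered.
```python
import torch
from diffusers import FluxPipeline
# Load the FLUX.1-dev base pipeline and attach this LoRA adapter.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("glif-loradex-trainer/insectagon_mugshot_prodigy")
pipe.enable_model_cpu_offload()  # reduces peak VRAM at the cost of speed
# Include the trigger word in the prompt.
prompt = "A cartoon Jedi with green lightsaber [mug$hot]"
image = pipe(prompt, guidance_scale=3.5, num_inference_steps=28).images[0]
image.save("mugshot_sample.png")
```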
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/insectagon_mugshot_prodigy/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
migtissera/Tess-R1-Limerick-Llama-3.1-70B | migtissera | 2024-11-06T17:12:38Z | 15 | 20 | null | [
"pytorch",
"llama",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:finetune:meta-llama/Llama-3.1-70B",
"license:llama3.1",
"region:us"
] | null | 2024-11-03T18:56:28Z | ---
license: llama3.1
base_model: meta-llama/Llama-3.1-70B
model-index:
- name: Tess-R1-Llama-3.1-70B
results: []
---
# Tess-R1 Limerick (Llama-3.1-70B)

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# Introduction
Welcome to the Tess-Reasoning-1 (Tess-R1) series of models. Tess-R1 is designed with test-time compute in mind, and has the capability to produce Chain-of-Thought (CoT) reasoning before producing the final output.
The model is trained to first think step-by-step and to contemplate its answers. It can also write alternatives after contemplating. Once all the steps have been thought through, it writes the final output.
1. Step-by-step, Chain-of-Thought thinking process. Uses `<thinking>` `</thinking>` tags to indicate when the model is performing CoT.
2. `<contemplation>` `</contemplation>` tags are used when the model contemplates its answers.
3. `<alternatively>` `</alternatively>` tags are used for alternate suggestions.
4. Finally, `<output>` `</output>` tags are used for the final output.
## Important Note:
In a multi-turn conversation, only the contents between the `<output>` `</output>` tags (discarding the tags) should be carried forward. Otherwise the model will see out of distribution input data and will fail.
The model was trained mostly with Chain-of-Thought reasoning data, including the XML tags. However, to generalize model generations, some single-turn and multi-turn data without XML tags were also included. Because of this, in some instances the model does not produce XML tags and does not fully utilize its test-time compute capabilities. There are two ways to get around this:
- Include a try/catch statement in your inference script, and only pass on the contents between the `<output>` `</output>` tags if they are available.
- Use the `<thinking>` tag as the seed of the generation to force the model to produce outputs with XML tags, e.g. `f"{conversation}{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n<thinking>"`. A minimal sketch of this approach follows below.
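The following is an illustrative sketch of the second option, not an official recipe: the user prompt is a placeholder, the sampling settings simply mirror the script in the Inference section, and the abbreviated system message must be replaced with the full Tess-R1 system message given in the System Message section.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/Tess-R1-Limerick-Llama-3.1-70B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)
system_message = "You are Tess-R1, an advanced AI that was created for complex reasoning. ..."  # replace with the full system message below
user_input = "How many prime numbers are there between 1 and 25?"
# Seed the assistant turn with <thinking> so the model is pushed into the tagged CoT format.
prompt = (
    f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_message}<|eot_id|>"
    f"<|start_header_id|>user<|end_header_id|>\n\n{user_input}<|eot_id|>"
    f"<|start_header_id|>assistant<|end_header_id|>\n\n<thinking>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs, max_new_tokens=2048, do_sample=True, temperature=0.75, top_p=1.0
)
completion = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
# The <thinking> seed is part of the prompt, not the completion, so prepend it before parsing the tags.
print("<thinking>" + completion)
```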
# Prompt Format
The model uses Llama3 prompt format.
# System Message
The system message *must* be the following:
```You are Tess-R1, an advanced AI that was created for complex reasoning. Given a user query, you are able to first create a Chain-of-Thought (CoT) reasoning. Once the CoT is devised, you then proceed to first think about how to answer. While doing this, you have the capability to contemplate on the thought, and also provide alternatives. Once the CoT steps have been thought through, you then respond by creating the final output.```
# Evaluations
Since the model is trained to use test-time compute, the evaluations were performed by first setting the system message, and then extracting the contents between the `<output>` `</output>` tags. Only the contents between the tags were then used for the evaluations.
| | Tess-R1 Limerick | Claude 3.5 Haiku | GPT-4o mini |
|--------------|------------------|------------------|-------------|
| GPQA | 41.5% | 41.6% | 40.2% |
| MMLU | 81.6% | - | 82.0% |
| MATH | 64.2% | 69.4% | 70.2% |
| MMLU-Pro | 65.6% | 65.0% | - |
| HumanEval | 61.0% | 88.1% | 87.2% |
The evaluations were performed using a fork of Glaive's `simple-evals` codebase. Many thanks to @winglian for performing the evals. The codebase for evaluations can be found here: https://github.com/winglian/simple-evals
Example to run evaluations:
`python run_reflection_eval.py tess_r1_70b --evals gpqa mmlu math`
The system message has been edited in the sampler to reflect Tess-R1's system prompt.
# Inference
I have included a sample Python script below. This script uses a try/catch statement to carry forward the model generations in a multi-turn conversation.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
import re
class LLM(object):
def __init__(self, model_path):
self.model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map="auto",
load_in_4bit=False,
trust_remote_code=False,
)
self.tokenizer = AutoTokenizer.from_pretrained(
model_path, trust_remote_code=False
)
self.terminators = [
self.tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
self.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
def generate_text(self, instruction):
tokens = self.tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 4096,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = self.model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
pad_token_id=self.tokenizer.eos_token_id,
eos_token_id=self.terminators,
)
output = rest[0][length:]
string = self.tokenizer.decode(output, skip_special_tokens=True)
return f"{string}"
def extract_output(self, text):
pattern = r"<output>(.*?)</output>"
match = re.search(pattern, text, re.DOTALL)
content = match.group(1).strip()
return content
def respond_llama3(self, user_prompt):
conversation = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are Tess-R1, an advanced AI that was created for complex reasoning. Given a user query, you are able to first create a Chain-of-Thought (CoT) reasoning. Once the CoT is devised, you then proceed to first think about how to answer. While doing this, you have the capability to contemplate on the thought, and also provide alternatives. Once the CoT steps have been thought through, you then respond by creating the final output.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"""
        llm_prompt = f"{conversation}{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
answer = self.generate_text(llm_prompt)
try:
answer_output = self.extract_output(answer)
return answer_output
except:
return answer
model_path = "neurolattice/Tess-R1-Llama-3.1-70B"
llm = LLM(model_path)
conversation = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are Tess-R1, an advanced AI that was created for complex reasoning. Given a user query, you are able to first create a Chain-of-Thought (CoT) reasoning. Once the CoT is devised, you then proceed to first think about how to answer. While doing this, you have the capability to contemplate on the thought, and also provide alternatives. Once the CoT steps have been thought through, you then respond by creating the final output.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"""
while True:
user_input = input("You: ")
llm_prompt = f"{conversation}{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
answer = llm.generate_text(llm_prompt)
print("=" * 132)
print(answer)
try:
answer_output = llm.extract_output(answer)
print("=" * 132)
print(answer_output)
conversation = f"{llm_prompt}{answer_output}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
except:
conversation = f"{llm_prompt}{answer}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
``` |
slounaci/model_td3 | slounaci | 2024-11-06T17:11:29Z | 117 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"camembert",
"token-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-06T17:01:25Z | ---
library_name: transformers
license: mit
base_model: almanach/camembert-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: model_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_3
This model is a fine-tuned version of [almanach/camembert-base](https://huggingface.co/almanach/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0223
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 16 | 0.0276 | 0.0 | 0.0 | 0.0 | 0.9970 |
| No log | 2.0 | 32 | 0.0292 | 0.0 | 0.0 | 0.0 | 0.9964 |
| No log | 3.0 | 48 | 0.0265 | 0.0 | 0.0 | 0.0 | 0.9970 |
| No log | 4.0 | 64 | 0.0256 | 0.0 | 0.0 | 0.0 | 0.9970 |
| No log | 5.0 | 80 | 0.0253 | 0.0 | 0.0 | 0.0 | 0.9970 |
| No log | 6.0 | 96 | 0.0230 | 0.0 | 0.0 | 0.0 | 0.9976 |
| No log | 7.0 | 112 | 0.0226 | 0.0 | 0.0 | 0.0 | 0.9976 |
| No log | 8.0 | 128 | 0.0224 | 0.0 | 0.0 | 0.0 | 0.9976 |
| No log | 9.0 | 144 | 0.0225 | 0.0 | 0.0 | 0.0 | 0.9976 |
| No log | 10.0 | 160 | 0.0223 | 0.0 | 0.0 | 0.0 | 0.9976 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Tokenizers 0.19.1
|
RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf | RichardErkhov | 2024-11-06T17:10:25Z | 6 | 0 | null | [
"gguf",
"arxiv:2306.01708",
"endpoints_compatible",
"region:us"
] | null | 2024-11-06T15:58:51Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MFANN-phigments-slerp-V2 - GGUF
- Model creator: https://huggingface.co/netcat420/
- Original model: https://huggingface.co/netcat420/MFANN-phigments-slerp-V2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MFANN-phigments-slerp-V2.Q2_K.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q2_K.gguf) | Q2_K | 1.03GB |
| [MFANN-phigments-slerp-V2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [MFANN-phigments-slerp-V2.Q3_K.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q3_K.gguf) | Q3_K | 1.33GB |
| [MFANN-phigments-slerp-V2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q3_K_M.gguf) | Q3_K_M | 1.33GB |
| [MFANN-phigments-slerp-V2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q3_K_L.gguf) | Q3_K_L | 1.47GB |
| [MFANN-phigments-slerp-V2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [MFANN-phigments-slerp-V2.Q4_0.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q4_0.gguf) | Q4_0 | 1.49GB |
| [MFANN-phigments-slerp-V2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [MFANN-phigments-slerp-V2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q4_K_S.gguf) | Q4_K_S | 1.51GB |
| [MFANN-phigments-slerp-V2.Q4_K.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q4_K.gguf) | Q4_K | 1.62GB |
| [MFANN-phigments-slerp-V2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q4_K_M.gguf) | Q4_K_M | 1.62GB |
| [MFANN-phigments-slerp-V2.Q4_1.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q4_1.gguf) | Q4_1 | 1.65GB |
| [MFANN-phigments-slerp-V2.Q5_0.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q5_0.gguf) | Q5_0 | 1.8GB |
| [MFANN-phigments-slerp-V2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [MFANN-phigments-slerp-V2.Q5_K.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q5_K.gguf) | Q5_K | 1.87GB |
| [MFANN-phigments-slerp-V2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q5_K_M.gguf) | Q5_K_M | 1.87GB |
| [MFANN-phigments-slerp-V2.Q5_1.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q5_1.gguf) | Q5_1 | 1.95GB |
| [MFANN-phigments-slerp-V2.Q6_K.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q6_K.gguf) | Q6_K | 2.13GB |
| [MFANN-phigments-slerp-V2.Q8_0.gguf](https://huggingface.co/RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf/blob/main/MFANN-phigments-slerp-V2.Q8_0.gguf) | Q8_0 | 2.75GB |
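As an illustrative sketch (not part of the original quantization notes), any of these files can be downloaded and run locally with `llama-cpp-python`; the chosen quant, context size, and prompt below are arbitrary.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
# Fetch a mid-sized quant from this repository and load it with the llama.cpp bindings.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/netcat420_-_MFANN-phigments-slerp-V2-gguf",
    filename="MFANN-phigments-slerp-V2.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Briefly explain what model merging is.", max_tokens=64)
print(out["choices"][0]["text"])
```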
Original model description:
---
base_model:
- netcat420/MFANN-Phigments12-slerp
- liminerity/Phigments12
- netcat420/MFANN-phigments-slerp-1a
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) as a base.
### Models Merged
The following models were included in the merge:
* [netcat420/MFANN-Phigments12-slerp](https://huggingface.co/netcat420/MFANN-Phigments12-slerp)
* [netcat420/MFANN-phigments-slerp-1a](https://huggingface.co/netcat420/MFANN-phigments-slerp-1a)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: liminerity/Phigments12
# no parameters necessary for base model
- model: netcat420/MFANN-phigments-slerp-1a
parameters:
density: 1
weight: 1
- model: netcat420/MFANN-Phigments12-slerp
parameters:
density: 1
weight: 1
merge_method: ties
base_model: liminerity/Phigments12
parameters:
normalize: true
dtype: float16
```
|
jiawei1018/openmathinstruct2-llama-3.1-8B-Instruct-lr7-ep1 | jiawei1018 | 2024-11-06T17:05:26Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T16:25:26Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openmathinstruct2-llama-3.1-8B-Instruct-lr7-ep1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openmathinstruct2-llama-3.1-8B-Instruct-lr7-ep1
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the openmathinstruct2_cot_20k_train dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7567
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.1
|
michecosta/food_mic | michecosta | 2024-11-06T17:05:18Z | 25 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"food-photography",
"photorealistic",
"base_model:SG161222/Realistic_Vision_V2.0",
"base_model:adapter:SG161222/Realistic_Vision_V2.0",
"license:openrail",
"region:us"
] | text-to-image | 2024-11-06T17:01:40Z | ---
license: openrail
base_model: "SG161222/Realistic_Vision_V2.0"
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
- food-photography
- photorealistic
---
# Gourmet Food Photography LORA
A photorealistic LORA model trained on professional food photography. Specialized in generating high-end culinary presentations with perfect lighting, depth of field, and intricate food details.
## Training Details
- Base Model: Realistic Vision V2.0
- Network Rank: 48
- Training Steps: 2000
- Learning Rate: 0.0004
- Training Images: 30
## Usage Tips
Best results with trigger words: "gourmet plating", "food photography", "culinary presentation"
## Example Prompts
"(RAW photo, photorealistic:1.2), gourmet plating, professional food photography, soft natural lighting, shallow depth of field, marble surface, garnished dish, fresh ingredients, bokeh background, 8k uhd, high detail"
Negative prompt: "artificial looking, oversaturated, cartoon food, plastic looking, blurry, low quality, dark shadows, overexposed"
## Recommended Settings
- CFG Scale: 7-8
- Sampler: DPM++ 2M Karras
- Steps: 25-30
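Putting the tips and settings above together, here is a minimal `diffusers` sketch. It is an assumption-laden example rather than an official recipe: it presumes the base checkpoint is available in diffusers format and that the LoRA file in this repository resolves via `load_lora_weights` (otherwise pass `weight_name="<file>.safetensors"`).
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V2.0", torch_dtype=torch.float16
).to("cuda")
# DPM++ 2M Karras, as recommended above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
pipe.load_lora_weights("michecosta/food_mic")
prompt = (
    "(RAW photo, photorealistic:1.2), gourmet plating, professional food photography, "
    "soft natural lighting, shallow depth of field, marble surface, garnished dish, "
    "fresh ingredients, bokeh background, 8k uhd, high detail"
)
negative = (
    "artificial looking, oversaturated, cartoon food, plastic looking, blurry, "
    "low quality, dark shadows, overexposed"
)
# CFG 7-8 and 25-30 steps, per the recommended settings above.
image = pipe(prompt, negative_prompt=negative, guidance_scale=7.5, num_inference_steps=28).images[0]
image.save("gourmet_sample.png")
```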
|
MayBashendy/ASAP_FineTuningBERT_Aug_k25_task1_organization_fold0 | MayBashendy | 2024-11-06T16:54:53Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T16:21:43Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k25_task1_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k25_task1_organization_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4564
- Qwk: 0.5184 (quadratic weighted kappa; see the sketch below)
- Mse: 0.4564
- Rmse: 0.6756
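For reference, the three reported metrics can be computed with scikit-learn as in the minimal sketch below (the score arrays are hypothetical placeholders, not the actual evaluation data).
```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error
# Hypothetical gold scores and model predictions on an evaluation set.
y_true = np.array([2, 3, 3, 4, 1, 2])
y_pred = np.array([2, 3, 4, 4, 2, 2])
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # Qwk
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = mse ** 0.5                                             # Rmse
print(qwk, mse, rmse)
```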
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0051 | 2 | 9.7197 | 0.0 | 9.7197 | 3.1176 |
| No log | 0.0103 | 4 | 8.2513 | 0.0137 | 8.2513 | 2.8725 |
| No log | 0.0154 | 6 | 7.2426 | 0.0054 | 7.2426 | 2.6912 |
| No log | 0.0206 | 8 | 6.5605 | 0.0018 | 6.5605 | 2.5614 |
| No log | 0.0257 | 10 | 5.7007 | 0.0 | 5.7007 | 2.3876 |
| No log | 0.0308 | 12 | 4.8602 | 0.0 | 4.8602 | 2.2046 |
| No log | 0.0360 | 14 | 4.0732 | 0.0312 | 4.0732 | 2.0182 |
| No log | 0.0411 | 16 | 3.3232 | 0.0150 | 3.3232 | 1.8230 |
| No log | 0.0463 | 18 | 2.6070 | 0.0115 | 2.6070 | 1.6146 |
| No log | 0.0514 | 20 | 2.0523 | 0.0077 | 2.0523 | 1.4326 |
| No log | 0.0566 | 22 | 1.5454 | 0.0077 | 1.5454 | 1.2431 |
| No log | 0.0617 | 24 | 1.2206 | 0.0976 | 1.2206 | 1.1048 |
| No log | 0.0668 | 26 | 0.9968 | 0.0484 | 0.9968 | 0.9984 |
| No log | 0.0720 | 28 | 0.8412 | 0.0316 | 0.8412 | 0.9172 |
| No log | 0.0771 | 30 | 0.7720 | 0.0316 | 0.7720 | 0.8786 |
| No log | 0.0823 | 32 | 0.7483 | 0.0316 | 0.7483 | 0.8650 |
| No log | 0.0874 | 34 | 0.7221 | 0.0316 | 0.7221 | 0.8498 |
| No log | 0.0925 | 36 | 0.6818 | 0.0679 | 0.6818 | 0.8257 |
| No log | 0.0977 | 38 | 0.6879 | 0.2823 | 0.6879 | 0.8294 |
| No log | 0.1028 | 40 | 0.7261 | 0.4454 | 0.7261 | 0.8521 |
| No log | 0.1080 | 42 | 0.6917 | 0.0316 | 0.6917 | 0.8317 |
| No log | 0.1131 | 44 | 0.7032 | 0.0316 | 0.7032 | 0.8386 |
| No log | 0.1183 | 46 | 0.6912 | 0.0409 | 0.6912 | 0.8314 |
| No log | 0.1234 | 48 | 0.7595 | 0.1049 | 0.7595 | 0.8715 |
| No log | 0.1285 | 50 | 0.7885 | 0.0316 | 0.7885 | 0.8880 |
| No log | 0.1337 | 52 | 0.8640 | 0.0316 | 0.8640 | 0.9295 |
| No log | 0.1388 | 54 | 0.8273 | 0.0316 | 0.8273 | 0.9096 |
| No log | 0.1440 | 56 | 0.7669 | 0.0316 | 0.7669 | 0.8757 |
| No log | 0.1491 | 58 | 0.7488 | 0.0316 | 0.7488 | 0.8653 |
| No log | 0.1542 | 60 | 0.7392 | 0.0316 | 0.7392 | 0.8598 |
| No log | 0.1594 | 62 | 0.8688 | 0.0316 | 0.8688 | 0.9321 |
| No log | 0.1645 | 64 | 0.7803 | 0.0316 | 0.7803 | 0.8833 |
| No log | 0.1697 | 66 | 0.7587 | 0.0917 | 0.7587 | 0.8710 |
| No log | 0.1748 | 68 | 0.8153 | 0.1893 | 0.8153 | 0.9029 |
| No log | 0.1799 | 70 | 0.8050 | 0.0106 | 0.8050 | 0.8972 |
| No log | 0.1851 | 72 | 0.7948 | 0.0106 | 0.7948 | 0.8915 |
| No log | 0.1902 | 74 | 0.8814 | 0.0106 | 0.8814 | 0.9388 |
| No log | 0.1954 | 76 | 0.9010 | 0.0106 | 0.9010 | 0.9492 |
| No log | 0.2005 | 78 | 0.8501 | 0.0106 | 0.8501 | 0.9220 |
| No log | 0.2057 | 80 | 0.8026 | 0.0106 | 0.8026 | 0.8959 |
| No log | 0.2108 | 82 | 0.7781 | 0.0212 | 0.7781 | 0.8821 |
| No log | 0.2159 | 84 | 0.7539 | 0.0212 | 0.7539 | 0.8683 |
| No log | 0.2211 | 86 | 0.7138 | 0.0212 | 0.7138 | 0.8449 |
| No log | 0.2262 | 88 | 0.6840 | 0.0316 | 0.6840 | 0.8270 |
| No log | 0.2314 | 90 | 0.6716 | 0.0382 | 0.6716 | 0.8195 |
| No log | 0.2365 | 92 | 0.7469 | 0.1138 | 0.7469 | 0.8642 |
| No log | 0.2416 | 94 | 0.6904 | 0.0989 | 0.6904 | 0.8309 |
| No log | 0.2468 | 96 | 0.6409 | 0.0965 | 0.6409 | 0.8006 |
| No log | 0.2519 | 98 | 0.5906 | 0.1657 | 0.5906 | 0.7685 |
| No log | 0.2571 | 100 | 0.5836 | 0.3276 | 0.5836 | 0.7640 |
| No log | 0.2622 | 102 | 0.5813 | 0.2713 | 0.5813 | 0.7624 |
| No log | 0.2674 | 104 | 0.6539 | 0.2029 | 0.6539 | 0.8086 |
| No log | 0.2725 | 106 | 0.6165 | 0.1761 | 0.6165 | 0.7852 |
| No log | 0.2776 | 108 | 0.6649 | 0.2300 | 0.6649 | 0.8154 |
| No log | 0.2828 | 110 | 0.5761 | 0.3670 | 0.5761 | 0.7590 |
| No log | 0.2879 | 112 | 0.6440 | 0.2348 | 0.6440 | 0.8025 |
| No log | 0.2931 | 114 | 0.5790 | 0.3560 | 0.5790 | 0.7609 |
| No log | 0.2982 | 116 | 0.5972 | 0.4462 | 0.5972 | 0.7728 |
| No log | 0.3033 | 118 | 0.5890 | 0.4195 | 0.5890 | 0.7674 |
| No log | 0.3085 | 120 | 0.6041 | 0.4154 | 0.6041 | 0.7773 |
| No log | 0.3136 | 122 | 0.6236 | 0.4118 | 0.6236 | 0.7897 |
| No log | 0.3188 | 124 | 0.6326 | 0.3870 | 0.6326 | 0.7954 |
| No log | 0.3239 | 126 | 0.6408 | 0.3804 | 0.6408 | 0.8005 |
| No log | 0.3290 | 128 | 0.6351 | 0.2678 | 0.6351 | 0.7969 |
| No log | 0.3342 | 130 | 0.6273 | 0.2938 | 0.6273 | 0.7921 |
| No log | 0.3393 | 132 | 0.6098 | 0.3983 | 0.6098 | 0.7809 |
| No log | 0.3445 | 134 | 0.5610 | 0.3395 | 0.5610 | 0.7490 |
| No log | 0.3496 | 136 | 0.5803 | 0.2881 | 0.5803 | 0.7618 |
| No log | 0.3548 | 138 | 0.6078 | 0.2697 | 0.6078 | 0.7796 |
| No log | 0.3599 | 140 | 0.5420 | 0.3548 | 0.5420 | 0.7362 |
| No log | 0.3650 | 142 | 0.6176 | 0.4748 | 0.6176 | 0.7859 |
| No log | 0.3702 | 144 | 0.6974 | 0.4289 | 0.6974 | 0.8351 |
| No log | 0.3753 | 146 | 0.6100 | 0.4760 | 0.6100 | 0.7810 |
| No log | 0.3805 | 148 | 0.5638 | 0.3984 | 0.5638 | 0.7509 |
| No log | 0.3856 | 150 | 0.5885 | 0.3725 | 0.5885 | 0.7672 |
| No log | 0.3907 | 152 | 0.6188 | 0.4020 | 0.6188 | 0.7866 |
| No log | 0.3959 | 154 | 0.6011 | 0.4209 | 0.6011 | 0.7753 |
| No log | 0.4010 | 156 | 0.5802 | 0.3927 | 0.5802 | 0.7617 |
| No log | 0.4062 | 158 | 0.6003 | 0.2077 | 0.6003 | 0.7748 |
| No log | 0.4113 | 160 | 0.6117 | 0.1512 | 0.6117 | 0.7821 |
| No log | 0.4165 | 162 | 0.5686 | 0.3428 | 0.5686 | 0.7541 |
| No log | 0.4216 | 164 | 0.5838 | 0.4219 | 0.5838 | 0.7641 |
| No log | 0.4267 | 166 | 0.5672 | 0.2763 | 0.5672 | 0.7531 |
| No log | 0.4319 | 168 | 0.6833 | 0.1056 | 0.6833 | 0.8266 |
| No log | 0.4370 | 170 | 0.6518 | 0.1132 | 0.6518 | 0.8074 |
| No log | 0.4422 | 172 | 0.5972 | 0.1976 | 0.5972 | 0.7728 |
| No log | 0.4473 | 174 | 0.5658 | 0.2990 | 0.5658 | 0.7522 |
| No log | 0.4524 | 176 | 0.5975 | 0.4283 | 0.5975 | 0.7730 |
| No log | 0.4576 | 178 | 0.5976 | 0.4283 | 0.5976 | 0.7731 |
| No log | 0.4627 | 180 | 0.5860 | 0.3942 | 0.5860 | 0.7655 |
| No log | 0.4679 | 182 | 0.5564 | 0.3634 | 0.5564 | 0.7459 |
| No log | 0.4730 | 184 | 0.5481 | 0.3261 | 0.5481 | 0.7404 |
| No log | 0.4781 | 186 | 0.5404 | 0.3953 | 0.5404 | 0.7351 |
| No log | 0.4833 | 188 | 0.6461 | 0.4499 | 0.6461 | 0.8038 |
| No log | 0.4884 | 190 | 0.6761 | 0.4304 | 0.6761 | 0.8222 |
| No log | 0.4936 | 192 | 0.5535 | 0.4554 | 0.5535 | 0.7440 |
| No log | 0.4987 | 194 | 0.5418 | 0.3669 | 0.5418 | 0.7361 |
| No log | 0.5039 | 196 | 0.5403 | 0.3481 | 0.5403 | 0.7350 |
| No log | 0.5090 | 198 | 0.5639 | 0.4450 | 0.5639 | 0.7509 |
| No log | 0.5141 | 200 | 0.5816 | 0.4289 | 0.5816 | 0.7626 |
| No log | 0.5193 | 202 | 0.5499 | 0.4539 | 0.5499 | 0.7416 |
| No log | 0.5244 | 204 | 0.5273 | 0.3763 | 0.5273 | 0.7261 |
| No log | 0.5296 | 206 | 0.5654 | 0.2645 | 0.5654 | 0.7519 |
| No log | 0.5347 | 208 | 0.5674 | 0.2675 | 0.5674 | 0.7532 |
| No log | 0.5398 | 210 | 0.5249 | 0.3926 | 0.5249 | 0.7245 |
| No log | 0.5450 | 212 | 0.5320 | 0.4558 | 0.5320 | 0.7294 |
| No log | 0.5501 | 214 | 0.5117 | 0.3944 | 0.5117 | 0.7154 |
| No log | 0.5553 | 216 | 0.5569 | 0.3028 | 0.5569 | 0.7462 |
| No log | 0.5604 | 218 | 0.5266 | 0.3504 | 0.5266 | 0.7257 |
| No log | 0.5656 | 220 | 0.4845 | 0.4490 | 0.4845 | 0.6961 |
| No log | 0.5707 | 222 | 0.5231 | 0.5271 | 0.5231 | 0.7233 |
| No log | 0.5758 | 224 | 0.4822 | 0.4886 | 0.4822 | 0.6944 |
| No log | 0.5810 | 226 | 0.4878 | 0.3970 | 0.4878 | 0.6984 |
| No log | 0.5861 | 228 | 0.4745 | 0.4288 | 0.4745 | 0.6888 |
| No log | 0.5913 | 230 | 0.5477 | 0.5292 | 0.5477 | 0.7401 |
| No log | 0.5964 | 232 | 0.6008 | 0.5223 | 0.6008 | 0.7751 |
| No log | 0.6015 | 234 | 0.5149 | 0.5206 | 0.5149 | 0.7175 |
| No log | 0.6067 | 236 | 0.4841 | 0.4222 | 0.4841 | 0.6958 |
| No log | 0.6118 | 238 | 0.5127 | 0.3362 | 0.5127 | 0.7160 |
| No log | 0.6170 | 240 | 0.4975 | 0.3923 | 0.4975 | 0.7053 |
| No log | 0.6221 | 242 | 0.5268 | 0.5096 | 0.5268 | 0.7258 |
| No log | 0.6272 | 244 | 0.6378 | 0.5017 | 0.6378 | 0.7986 |
| No log | 0.6324 | 246 | 0.5999 | 0.5175 | 0.5999 | 0.7745 |
| No log | 0.6375 | 248 | 0.4988 | 0.5016 | 0.4988 | 0.7063 |
| No log | 0.6427 | 250 | 0.4872 | 0.4214 | 0.4872 | 0.6980 |
| No log | 0.6478 | 252 | 0.5091 | 0.3482 | 0.5091 | 0.7135 |
| No log | 0.6530 | 254 | 0.4968 | 0.3697 | 0.4968 | 0.7049 |
| No log | 0.6581 | 256 | 0.4635 | 0.5082 | 0.4635 | 0.6808 |
| No log | 0.6632 | 258 | 0.5824 | 0.5396 | 0.5824 | 0.7631 |
| No log | 0.6684 | 260 | 0.5973 | 0.5489 | 0.5973 | 0.7729 |
| No log | 0.6735 | 262 | 0.5086 | 0.5418 | 0.5086 | 0.7132 |
| No log | 0.6787 | 264 | 0.4792 | 0.4449 | 0.4792 | 0.6922 |
| No log | 0.6838 | 266 | 0.5579 | 0.3627 | 0.5579 | 0.7469 |
| No log | 0.6889 | 268 | 0.5398 | 0.3697 | 0.5398 | 0.7347 |
| No log | 0.6941 | 270 | 0.4788 | 0.4614 | 0.4788 | 0.6920 |
| No log | 0.6992 | 272 | 0.5757 | 0.5162 | 0.5757 | 0.7587 |
| No log | 0.7044 | 274 | 0.6351 | 0.4778 | 0.6351 | 0.7969 |
| No log | 0.7095 | 276 | 0.5685 | 0.4801 | 0.5685 | 0.7540 |
| No log | 0.7147 | 278 | 0.5609 | 0.3805 | 0.5609 | 0.7489 |
| No log | 0.7198 | 280 | 0.5666 | 0.3615 | 0.5666 | 0.7527 |
| No log | 0.7249 | 282 | 0.5373 | 0.4395 | 0.5373 | 0.7330 |
| No log | 0.7301 | 284 | 0.5215 | 0.4876 | 0.5215 | 0.7221 |
| No log | 0.7352 | 286 | 0.4933 | 0.4565 | 0.4933 | 0.7024 |
| No log | 0.7404 | 288 | 0.5489 | 0.3641 | 0.5489 | 0.7409 |
| No log | 0.7455 | 290 | 0.6123 | 0.3346 | 0.6123 | 0.7825 |
| No log | 0.7506 | 292 | 0.5541 | 0.4180 | 0.5541 | 0.7444 |
| No log | 0.7558 | 294 | 0.4790 | 0.4901 | 0.4790 | 0.6921 |
| No log | 0.7609 | 296 | 0.4686 | 0.5074 | 0.4686 | 0.6846 |
| No log | 0.7661 | 298 | 0.4663 | 0.4929 | 0.4663 | 0.6829 |
| No log | 0.7712 | 300 | 0.4666 | 0.5431 | 0.4666 | 0.6831 |
| No log | 0.7763 | 302 | 0.4872 | 0.5412 | 0.4872 | 0.6980 |
| No log | 0.7815 | 304 | 0.4857 | 0.5417 | 0.4857 | 0.6969 |
| No log | 0.7866 | 306 | 0.4833 | 0.5466 | 0.4833 | 0.6952 |
| No log | 0.7918 | 308 | 0.4925 | 0.5549 | 0.4925 | 0.7018 |
| No log | 0.7969 | 310 | 0.4739 | 0.5404 | 0.4739 | 0.6884 |
| No log | 0.8021 | 312 | 0.4711 | 0.5205 | 0.4711 | 0.6863 |
| No log | 0.8072 | 314 | 0.5007 | 0.5171 | 0.5007 | 0.7076 |
| No log | 0.8123 | 316 | 0.5579 | 0.5316 | 0.5579 | 0.7469 |
| No log | 0.8175 | 318 | 0.5392 | 0.5250 | 0.5392 | 0.7343 |
| No log | 0.8226 | 320 | 0.5354 | 0.5256 | 0.5354 | 0.7317 |
| No log | 0.8278 | 322 | 0.5253 | 0.4336 | 0.5253 | 0.7248 |
| No log | 0.8329 | 324 | 0.5250 | 0.4681 | 0.5250 | 0.7246 |
| No log | 0.8380 | 326 | 0.5320 | 0.5253 | 0.5320 | 0.7294 |
| No log | 0.8432 | 328 | 0.4825 | 0.5053 | 0.4825 | 0.6946 |
| No log | 0.8483 | 330 | 0.4667 | 0.4666 | 0.4667 | 0.6832 |
| No log | 0.8535 | 332 | 0.4557 | 0.5208 | 0.4557 | 0.6750 |
| No log | 0.8586 | 334 | 0.4649 | 0.5286 | 0.4649 | 0.6818 |
| No log | 0.8638 | 336 | 0.4720 | 0.5408 | 0.4720 | 0.6870 |
| No log | 0.8689 | 338 | 0.4720 | 0.5090 | 0.4720 | 0.6870 |
| No log | 0.8740 | 340 | 0.4633 | 0.5394 | 0.4633 | 0.6807 |
| No log | 0.8792 | 342 | 0.4862 | 0.5372 | 0.4862 | 0.6973 |
| No log | 0.8843 | 344 | 0.5721 | 0.5510 | 0.5721 | 0.7564 |
| No log | 0.8895 | 346 | 0.7939 | 0.4588 | 0.7939 | 0.8910 |
| No log | 0.8946 | 348 | 0.8576 | 0.3951 | 0.8576 | 0.9260 |
| No log | 0.8997 | 350 | 0.6966 | 0.4815 | 0.6966 | 0.8346 |
| No log | 0.9049 | 352 | 0.5705 | 0.5189 | 0.5705 | 0.7553 |
| No log | 0.9100 | 354 | 0.5220 | 0.5013 | 0.5220 | 0.7225 |
| No log | 0.9152 | 356 | 0.5820 | 0.5494 | 0.5820 | 0.7629 |
| No log | 0.9203 | 358 | 0.7439 | 0.5036 | 0.7439 | 0.8625 |
| No log | 0.9254 | 360 | 0.6732 | 0.5195 | 0.6732 | 0.8205 |
| No log | 0.9306 | 362 | 0.4886 | 0.5138 | 0.4886 | 0.6990 |
| No log | 0.9357 | 364 | 0.4985 | 0.4153 | 0.4985 | 0.7061 |
| No log | 0.9409 | 366 | 0.5227 | 0.3879 | 0.5227 | 0.7230 |
| No log | 0.9460 | 368 | 0.4700 | 0.4772 | 0.4700 | 0.6856 |
| No log | 0.9512 | 370 | 0.5071 | 0.5468 | 0.5071 | 0.7121 |
| No log | 0.9563 | 372 | 0.5819 | 0.5450 | 0.5819 | 0.7628 |
| No log | 0.9614 | 374 | 0.5248 | 0.5394 | 0.5248 | 0.7244 |
| No log | 0.9666 | 376 | 0.4763 | 0.4821 | 0.4763 | 0.6901 |
| No log | 0.9717 | 378 | 0.4905 | 0.4417 | 0.4905 | 0.7004 |
| No log | 0.9769 | 380 | 0.4829 | 0.4738 | 0.4829 | 0.6949 |
| No log | 0.9820 | 382 | 0.5657 | 0.5193 | 0.5657 | 0.7521 |
| No log | 0.9871 | 384 | 0.7036 | 0.5190 | 0.7036 | 0.8388 |
| No log | 0.9923 | 386 | 0.6313 | 0.5412 | 0.6313 | 0.7945 |
| No log | 0.9974 | 388 | 0.4861 | 0.4993 | 0.4861 | 0.6972 |
| No log | 1.0026 | 390 | 0.4721 | 0.4625 | 0.4721 | 0.6871 |
| No log | 1.0077 | 392 | 0.4816 | 0.4456 | 0.4816 | 0.6940 |
| No log | 1.0129 | 394 | 0.4607 | 0.4838 | 0.4607 | 0.6787 |
| No log | 1.0180 | 396 | 0.4668 | 0.5060 | 0.4668 | 0.6832 |
| No log | 1.0231 | 398 | 0.4631 | 0.5200 | 0.4631 | 0.6805 |
| No log | 1.0283 | 400 | 0.4655 | 0.4533 | 0.4655 | 0.6823 |
| No log | 1.0334 | 402 | 0.5081 | 0.4148 | 0.5081 | 0.7128 |
| No log | 1.0386 | 404 | 0.4766 | 0.4245 | 0.4766 | 0.6904 |
| No log | 1.0437 | 406 | 0.4818 | 0.4912 | 0.4818 | 0.6941 |
| No log | 1.0488 | 408 | 0.5238 | 0.5213 | 0.5238 | 0.7238 |
| No log | 1.0540 | 410 | 0.4926 | 0.4612 | 0.4926 | 0.7019 |
| No log | 1.0591 | 412 | 0.5061 | 0.4156 | 0.5061 | 0.7114 |
| No log | 1.0643 | 414 | 0.5041 | 0.4209 | 0.5041 | 0.7100 |
| No log | 1.0694 | 416 | 0.5551 | 0.4780 | 0.5551 | 0.7451 |
| No log | 1.0746 | 418 | 0.5774 | 0.5102 | 0.5774 | 0.7599 |
| No log | 1.0797 | 420 | 0.5050 | 0.5034 | 0.5050 | 0.7106 |
| No log | 1.0848 | 422 | 0.4807 | 0.5015 | 0.4807 | 0.6933 |
| No log | 1.0900 | 424 | 0.4769 | 0.5189 | 0.4769 | 0.6906 |
| No log | 1.0951 | 426 | 0.4694 | 0.5283 | 0.4694 | 0.6851 |
| No log | 1.1003 | 428 | 0.4423 | 0.4854 | 0.4423 | 0.6650 |
| No log | 1.1054 | 430 | 0.4369 | 0.5068 | 0.4369 | 0.6610 |
| No log | 1.1105 | 432 | 0.4454 | 0.5455 | 0.4454 | 0.6674 |
| No log | 1.1157 | 434 | 0.4437 | 0.5512 | 0.4437 | 0.6661 |
| No log | 1.1208 | 436 | 0.4562 | 0.4929 | 0.4562 | 0.6754 |
| No log | 1.1260 | 438 | 0.4602 | 0.4870 | 0.4602 | 0.6783 |
| No log | 1.1311 | 440 | 0.4457 | 0.5258 | 0.4457 | 0.6676 |
| No log | 1.1362 | 442 | 0.4608 | 0.5101 | 0.4608 | 0.6788 |
| No log | 1.1414 | 444 | 0.4760 | 0.5264 | 0.4760 | 0.6899 |
| No log | 1.1465 | 446 | 0.5298 | 0.5349 | 0.5298 | 0.7279 |
| No log | 1.1517 | 448 | 0.5108 | 0.5336 | 0.5108 | 0.7147 |
| No log | 1.1568 | 450 | 0.5328 | 0.5322 | 0.5328 | 0.7299 |
| No log | 1.1620 | 452 | 0.5035 | 0.5248 | 0.5035 | 0.7096 |
| No log | 1.1671 | 454 | 0.5211 | 0.5226 | 0.5211 | 0.7219 |
| No log | 1.1722 | 456 | 0.4861 | 0.5296 | 0.4861 | 0.6972 |
| No log | 1.1774 | 458 | 0.4699 | 0.4930 | 0.4699 | 0.6855 |
| No log | 1.1825 | 460 | 0.4989 | 0.5226 | 0.4989 | 0.7063 |
| No log | 1.1877 | 462 | 0.6440 | 0.5366 | 0.6440 | 0.8025 |
| No log | 1.1928 | 464 | 0.7441 | 0.5185 | 0.7441 | 0.8626 |
| No log | 1.1979 | 466 | 0.6323 | 0.5209 | 0.6323 | 0.7952 |
| No log | 1.2031 | 468 | 0.4717 | 0.5118 | 0.4717 | 0.6868 |
| No log | 1.2082 | 470 | 0.4499 | 0.4895 | 0.4499 | 0.6708 |
| No log | 1.2134 | 472 | 0.4587 | 0.5022 | 0.4587 | 0.6773 |
| No log | 1.2185 | 474 | 0.5473 | 0.5565 | 0.5473 | 0.7398 |
| No log | 1.2237 | 476 | 0.5294 | 0.5544 | 0.5294 | 0.7276 |
| No log | 1.2288 | 478 | 0.4768 | 0.5068 | 0.4768 | 0.6905 |
| No log | 1.2339 | 480 | 0.4621 | 0.5117 | 0.4621 | 0.6798 |
| No log | 1.2391 | 482 | 0.4542 | 0.5083 | 0.4542 | 0.6739 |
| No log | 1.2442 | 484 | 0.4625 | 0.5366 | 0.4625 | 0.6801 |
| No log | 1.2494 | 486 | 0.4630 | 0.5530 | 0.4630 | 0.6804 |
| No log | 1.2545 | 488 | 0.4324 | 0.5327 | 0.4324 | 0.6576 |
| No log | 1.2596 | 490 | 0.4783 | 0.4953 | 0.4783 | 0.6916 |
| No log | 1.2648 | 492 | 0.4386 | 0.5386 | 0.4386 | 0.6623 |
| No log | 1.2699 | 494 | 0.5001 | 0.5708 | 0.5001 | 0.7072 |
| No log | 1.2751 | 496 | 0.4724 | 0.5244 | 0.4724 | 0.6873 |
| No log | 1.2802 | 498 | 0.4831 | 0.5095 | 0.4831 | 0.6950 |
| 0.5324 | 1.2853 | 500 | 0.5707 | 0.5382 | 0.5707 | 0.7554 |
| 0.5324 | 1.2905 | 502 | 0.6423 | 0.5507 | 0.6423 | 0.8014 |
| 0.5324 | 1.2956 | 504 | 0.5986 | 0.5476 | 0.5986 | 0.7737 |
| 0.5324 | 1.3008 | 506 | 0.4654 | 0.5226 | 0.4654 | 0.6822 |
| 0.5324 | 1.3059 | 508 | 0.5024 | 0.3636 | 0.5024 | 0.7088 |
| 0.5324 | 1.3111 | 510 | 0.5388 | 0.3602 | 0.5388 | 0.7340 |
| 0.5324 | 1.3162 | 512 | 0.4840 | 0.4043 | 0.4840 | 0.6957 |
| 0.5324 | 1.3213 | 514 | 0.4564 | 0.5184 | 0.4564 | 0.6756 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
omarelsayeed/t | omarelsayeed | 2024-11-06T16:51:39Z | 120 | 0 | transformers | [
"transformers",
"pytorch",
"deformable_detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-11-06T16:51:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
glif-loradex-trainer/Keskitariv_captain_cook_ai_3k | glif-loradex-trainer | 2024-11-06T16:51:17Z | 7 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-11-06T16:50:46Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730911732531__000003000_0.jpg
text: captain cook the pirate shiba inu looking at the horizon in his telescope,
on the deck of his pirate frigate realistic artwork of Captain Cook the pirate
Shiba Inu
- output:
url: samples/1730911756085__000003000_1.jpg
text: captain cook the pirate shiba inu fighting in a duel with a giant scary
sea monster realistic artwork of Captain Cook the pirate Shiba Inu
- output:
url: samples/1730911779712__000003000_2.jpg
text: Captain cook the pirate shiba inu hugging with a cute squirrel realistic
artwork of Captain Cook the pirate Shiba Inu
- output:
url: samples/1730911803257__000003000_3.jpg
text: Captain cook the pirate shiba inu dancing and partying on the deck of his
boat, with his pirate shiba inus crew realistic artwork of Captain Cook the
pirate Shiba Inu
- output:
url: samples/1730911826907__000003000_4.jpg
text: Captain cook the pirate shiba inu riding on a doplhin, surrounded by multiple
other jumping out of the sea dolphins realistic artwork of Captain Cook the
pirate Shiba Inu
base_model: black-forest-labs/FLUX.1-dev
trigger: realistic artwork of Captain Cook the pirate Shiba Inu
instance_prompt: realistic artwork of Captain Cook the pirate Shiba Inu
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# captain_cook_ai_3k
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Keskitariv`.
<Gallery />
## Trigger words
You should use `realistic artwork of Captain Cook the pirate Shiba Inu` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/Keskitariv_captain_cook_ai_3k/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
maennyn/bert-finetuned-ner | maennyn | 2024-11-06T16:50:36Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-06T16:20:51Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9310572323932047
- name: Recall
type: recall
value: 0.9500168293503871
- name: F1
type: f1
value: 0.9404414827155352
- name: Accuracy
type: accuracy
value: 0.9860334373344322
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0621
- Precision: 0.9311
- Recall: 0.9500
- F1: 0.9404
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
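As a starting point (an illustrative sketch, not guidance from the model authors), the checkpoint can be used with the standard token-classification pipeline:
```python
from transformers import pipeline
# Aggregate sub-word predictions into entity spans (PER, ORG, LOC, MISC for CoNLL-2003).
ner = pipeline(
    "token-classification",
    model="maennyn/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City and Paris."))
```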
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0749 | 1.0 | 1756 | 0.0616 | 0.9094 | 0.9364 | 0.9227 | 0.9831 |
| 0.0357 | 2.0 | 3512 | 0.0658 | 0.9291 | 0.9438 | 0.9364 | 0.9848 |
| 0.0206 | 3.0 | 5268 | 0.0621 | 0.9311 | 0.9500 | 0.9404 | 0.9860 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
CPSC532/2024NOV06_arxiv_qa_data_cleaned_qwen | CPSC532 | 2024-11-06T16:49:46Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T16:45:35Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** CPSC532
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mav23/granite-3.0-8b-instruct-GGUF | mav23 | 2024-11-06T16:49:02Z | 140 | 0 | transformers | [
"transformers",
"gguf",
"language",
"granite-3.0",
"text-generation",
"arxiv:0000.00000",
"base_model:ibm-granite/granite-3.0-8b-base",
"base_model:quantized:ibm-granite/granite-3.0-8b-base",
"license:apache-2.0",
"model-index",
"region:us",
"conversational"
] | text-generation | 2024-11-06T15:48:42Z | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.0
model-index:
- name: granite-3.0-8b-instruct
results:
- task:
type: text-generation
dataset:
type: instruction-following
name: IFEval
metrics:
- name: pass@1
type: pass@1
value: 52.27
      verified: false
- task:
type: text-generation
dataset:
type: instruction-following
name: MT-Bench
metrics:
- name: pass@1
type: pass@1
value: 8.22
      verified: false
- task:
type: text-generation
dataset:
type: human-exams
name: AGI-Eval
metrics:
- name: pass@1
type: pass@1
value: 40.52
      verified: false
- task:
type: text-generation
dataset:
type: human-exams
name: MMLU
metrics:
- name: pass@1
type: pass@1
value: 65.82
      verified: false
- task:
type: text-generation
dataset:
type: human-exams
name: MMLU-Pro
metrics:
- name: pass@1
type: pass@1
value: 34.45
      verified: false
- task:
type: text-generation
dataset:
type: commonsense
name: OBQA
metrics:
- name: pass@1
type: pass@1
value: 46.6
      verified: false
- task:
type: text-generation
dataset:
type: commonsense
name: SIQA
metrics:
- name: pass@1
type: pass@1
value: 71.21
      verified: false
- task:
type: text-generation
dataset:
type: commonsense
name: Hellaswag
metrics:
- name: pass@1
type: pass@1
value: 82.61
      verified: false
- task:
type: text-generation
dataset:
type: commonsense
name: WinoGrande
metrics:
- name: pass@1
type: pass@1
value: 77.51
      verified: false
- task:
type: text-generation
dataset:
type: commonsense
name: TruthfulQA
metrics:
- name: pass@1
type: pass@1
value: 60.32
      verified: false
- task:
type: text-generation
dataset:
type: reading-comprehension
name: BoolQ
metrics:
- name: pass@1
type: pass@1
value: 88.65
      verified: false
- task:
type: text-generation
dataset:
type: reading-comprehension
name: SQuAD 2.0
metrics:
- name: pass@1
type: pass@1
value: 21.58
      verified: false
- task:
type: text-generation
dataset:
type: reasoning
name: ARC-C
metrics:
- name: pass@1
type: pass@1
value: 64.16
      verified: false
- task:
type: text-generation
dataset:
type: reasoning
name: GPQA
metrics:
- name: pass@1
type: pass@1
value: 33.81
      verified: false
- task:
type: text-generation
dataset:
type: reasoning
name: BBH
metrics:
- name: pass@1
type: pass@1
value: 51.55
      verified: false
- task:
type: text-generation
dataset:
type: code
name: HumanEvalSynthesis
metrics:
- name: pass@1
type: pass@1
value: 64.63
      verified: false
- task:
type: text-generation
dataset:
type: code
name: HumanEvalExplain
metrics:
- name: pass@1
type: pass@1
value: 57.16
      verified: false
- task:
type: text-generation
dataset:
type: code
name: HumanEvalFix
metrics:
- name: pass@1
type: pass@1
value: 65.85
      verified: false
- task:
type: text-generation
dataset:
type: code
name: MBPP
metrics:
- name: pass@1
type: pass@1
value: 49.6
      verified: false
- task:
type: text-generation
dataset:
type: math
name: GSM8K
metrics:
- name: pass@1
type: pass@1
value: 68.99
      verified: false
- task:
type: text-generation
dataset:
type: math
name: MATH
metrics:
- name: pass@1
type: pass@1
value: 30.94
      verified: false
- task:
type: text-generation
dataset:
type: multilingual
name: PAWS-X (7 langs)
metrics:
- name: pass@1
type: pass@1
value: 64.94
      verified: false
- task:
type: text-generation
dataset:
type: multilingual
name: MGSM (6 langs)
metrics:
- name: pass@1
type: pass@1
value: 48.2
      verified: false
base_model:
- ibm-granite/granite-3.0-8b-base
---
<!--  -->
<!--  -->
# Granite-3.0-8B-Instruct
**Model Summary:**
Granite-3.0-8B-Instruct is an 8B parameter model finetuned from *Granite-3.0-8B-Base* using a combination of open source instruction datasets with permissive licenses and internally collected synthetic datasets. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.
- **Developers:** Granite Team, IBM
- **GitHub Repository:** [ibm-granite/granite-3.0-language-models](https://github.com/ibm-granite/granite-3.0-language-models)
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Paper:** [Granite 3.0 Language Models](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf)
- **Release Date**: October 21st, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.0 models for languages beyond these 12 languages.
**Intended use:**
The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.
*Capabilities*
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
**Generation:**
This is a simple example of how to use Granite-3.0-8B-Instruct model.
Install the following libraries:
```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```
Then, copy the snippet from the section that is relevant for your use case.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "auto"
model_path = "ibm-granite/granite-3.0-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
{ "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(model.device)
# generate output tokens
output = model.generate(**input_tokens,
max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
```
**Model Architecture:**
Granite-3.0-8B-Instruct is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings.
| Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
| :-------- | :--------| :-------- | :------| :------|
| Embedding size | 2048 | **4096** | 1024 | 1536 |
| Number of layers | 40 | **40** | 24 | 32 |
| Attention head size | 64 | **128** | 64 | 64 |
| Number of attention heads | 32 | **32** | 16 | 24 |
| Number of KV heads | 8 | **8** | 8 | 8 |
| MLP hidden size | 8192 | **12800** | 512 | 512 |
| MLP activation | SwiGLU | **SwiGLU** | SwiGLU | SwiGLU |
| Number of Experts | — | **—** | 32 | 40 |
| MoE TopK | — | **—** | 8 | 8 |
| Initialization std | 0.1 | **0.1** | 0.1 | 0.1 |
| Sequence Length | 4096 | **4096** | 4096 | 4096 |
| Position Embedding | RoPE | **RoPE** | RoPE | RoPE |
| # Parameters | 2.5B | **8.1B** | 1.3B | 3.3B |
| # Active Parameters | 2.5B | **8.1B** | 400M | 800M |
| # Training tokens | 12T | **12T** | 10T | 10T |
**Training Data:**
Overall, our SFT data is largely comprised of three key sources: (1) publicly available datasets with permissive license, (2) internal synthetic data targeting specific capabilities, and (3) very small amounts of human-curated data. A detailed attribution of datasets can be found in the [Granite Technical Report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf) and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf).
**Infrastructure:**
We train Granite 3.0 Language Models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs while minimizing environmental impact by utilizing 100% renewable energy sources.
**Ethical Considerations and Limitations:**
Granite 3.0 Instruct Models are primarily finetuned using instruction-response pairs mostly in English, but also multilingual data covering eleven languages. Although this model can handle multilingual dialog use cases, its performance might not be similar to English tasks. In such case, introducing a small number of examples (few-shot) can help the model in generating more accurate outputs. While this model has been aligned by keeping safety in consideration, the model may in some cases produce inaccurate, biased, or unsafe responses to user prompts. So we urge the community to use this model with proper safety testing and tuning tailored for their specific tasks.
<!-- ## Citation
```
@misc{granite-models,
author = {author 1, author2, ...},
title = {},
journal = {},
volume = {},
year = {2024},
url = {https://arxiv.org/abs/0000.00000},
}
``` --> |
JBJoyce/wavlm-large-finetuned-SER | JBJoyce | 2024-11-06T16:45:27Z | 5 | 0 | null | [
"safetensors",
"wavlm",
"audio-classification",
"en",
"dataset:JBJoyce/SER_combined",
"base_model:microsoft/wavlm-large",
"base_model:finetune:microsoft/wavlm-large",
"region:us"
] | audio-classification | 2024-11-02T16:15:49Z | ---
datasets:
- JBJoyce/SER_combined
language:
- en
metrics:
- accuracy
base_model:
- microsoft/wavlm-large
pipeline_tag: audio-classification
--- |
AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-dpo-3epochs | AlekseyKorshuk | 2024-11-06T16:43:49Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-rl-trl",
"arxiv:2305.18290",
"base_model:AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-sft-qwen-7b-sft-3epochs",
"base_model:finetune:AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-sft-qwen-7b-sft-3epochs",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T14:24:02Z | ---
base_model: AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-sft-qwen-7b-sft-3epochs
datasets: AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-rl-trl
library_name: transformers
model_name: ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-dpo-3epochs
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-dpo-3epochs
This model is a fine-tuned version of [AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-sft-qwen-7b-sft-3epochs](https://huggingface.co/AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-sft-qwen-7b-sft-3epochs) on the [AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-rl-trl](https://huggingface.co/datasets/AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-rl-trl) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-dpo-3epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/aleksey-korshuk/huggingface/runs/xivzcosl)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
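
The training script itself is not shown in the card; a minimal TRL DPO setup might look like the sketch below. The dataset split/column layout and the `beta` value are assumptions (only the 3-epoch count is implied by the model name), and keyword names differ slightly across TRL versions.

```python
# Hedged sketch of a TRL DPO run for this checkpoint; beta and the dataset
# split/column layout are assumptions, not taken from the card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-sft-qwen-7b-sft-3epochs"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference data with prompt/chosen/rejected-style columns (assumed)
train_dataset = load_dataset(
    "AlekseyKorshuk/ai-detection-gutenberg-human-choosed-formatted-ai-rl-trl",
    split="train",
)

args = DPOConfig(output_dir="dpo-output", num_train_epochs=3, beta=0.1)  # beta is a placeholder
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```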
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.4.1+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Xu-Ouyang/pythia-6.9b-deduped-int8-step8-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-06T16:41:21Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-06T16:29:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
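
This section is left as a placeholder; a hedged loading sketch for this GPTQ checkpoint is given below. It assumes a GPTQ-compatible backend (e.g. `optimum` with `auto-gptq`) is installed, which the card does not confirm.

```python
# Hedged sketch: loading the GPTQ-quantized checkpoint with transformers.
# Requires a GPTQ backend (e.g. optimum + auto-gptq); not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Xu-Ouyang/pythia-6.9b-deduped-int8-step8-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```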
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SenTW/Llama_241107_01_FT_RAG03 | SenTW | 2024-11-06T16:37:31Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T15:59:18Z | ---
base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** SenTW
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF | featherless-ai-quants | 2024-11-06T16:33:20Z | 5 | 0 | null | [
"gguf",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-06T15:04:22Z | ---
base_model: kaist-ai-mistral-orpo-capybara-7k
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# kaist-ai-mistral-orpo-capybara-7k GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [kaist-ai-mistral-orpo-capybara-7k-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [kaist-ai-mistral-orpo-capybara-7k-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [kaist-ai-mistral-orpo-capybara-7k-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [kaist-ai-mistral-orpo-capybara-7k-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [kaist-ai-mistral-orpo-capybara-7k-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [kaist-ai-mistral-orpo-capybara-7k-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [kaist-ai-mistral-orpo-capybara-7k-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [kaist-ai-mistral-orpo-capybara-7k-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [kaist-ai-mistral-orpo-capybara-7k-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [kaist-ai-mistral-orpo-capybara-7k-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [kaist-ai-mistral-orpo-capybara-7k-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF/blob/main/kaist-ai-mistral-orpo-capybara-7k-Q8_0.gguf) | 7339.34 MB |
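
Any file in the table can be fetched programmatically and then passed to llama.cpp or llama-cpp-python; a minimal download sketch (using the Q4_K_M file as an example) is:

```python
# Download one of the quantized files listed above (Q4_K_M as an example).
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="featherless-ai-quants/kaist-ai-mistral-orpo-capybara-7k-GGUF",
    filename="kaist-ai-mistral-orpo-capybara-7k-Q4_K_M.gguf",
)
print(gguf_path)  # local path usable with llama.cpp / llama-cpp-python
```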
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
CarlosRiverMe/lora-alebrijeros-style | CarlosRiverMe | 2024-11-06T16:28:51Z | 19 | 0 | diffusers | [
"diffusers",
"sd3",
"sd3-diffusers",
"text-to-image",
"simpletuner",
"safe-for-work",
"lora",
"template:sd-lora",
"standard",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"license:other",
"region:us"
] | text-to-image | 2024-11-06T15:17:42Z | ---
license: other
base_model: "stabilityai/stable-diffusion-3.5-large"
tags:
- sd3
- sd3-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- standard
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_0_0.png
- text: 'sweatshirt painted in the alebrijeros style'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_1_0.png
---
# lora-alebrijeros-style
This is a standard PEFT LoRA derived from [stabilityai/stable-diffusion-3.5-large](https://huggingface.co/stabilityai/stable-diffusion-3.5-large).
The main validation prompt used during training was:
```
sweatshirt painted in the alebrijeros style
```
## Validation settings
- CFG: `5.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `None`
- Seed: `42`
- Resolution: `512x512`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 4
- Training steps: 2600
- Learning rate: 5e-05
- Max grad norm: 0.01
- Effective batch size: 1
- Micro-batch size: 1
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: Pure BF16
- Quantised: Yes: int8-quanto
- Xformers: Not used
- LoRA Rank: 64
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### alebrijeros-style-dataset-512
- Repeats: 5
- Total number of images: 25
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### alebrijeros-style-dataset-1024
- Repeats: 5
- Total number of images: 25
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### alebrijeros-style-dataset-512-crop
- Repeats: 5
- Total number of images: 25
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
### alebrijeros-style-dataset-1024-crop
- Repeats: 5
- Total number of images: 25
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
## Inference
```python
import torch
from diffusers import DiffusionPipeline
model_id = 'stabilityai/stable-diffusion-3.5-large'
adapter_id = 'CarlosRiverMe/lora-alebrijeros-style'
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)
prompt = "sweatshirt painted in the alebrijeros style"
negative_prompt = 'blurry, cropped, ugly'
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
prompt=prompt,
negative_prompt=negative_prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
width=512,
height=512,
guidance_scale=5.0,
).images[0]
image.save("output.png", format="PNG")
```
|
OPTML-Group/TOFU-origin-Llama-2-7b-chat | OPTML-Group | 2024-11-06T16:26:05Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unlearn",
"machine-unlearning",
"llm-unlearning",
"data-privacy",
"large-language-models",
"trustworthy-ai",
"trustworthy-machine-learning",
"language-model",
"en",
"dataset:locuslab/TOFU",
"arxiv:2410.07163",
"arxiv:2401.06121",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:finetune:NousResearch/Llama-2-7b-chat-hf",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-24T20:19:51Z | ---
license: mit
datasets:
- locuslab/TOFU
language:
- en
base_model:
- NousResearch/Llama-2-7b-chat-hf
pipeline_tag: text-generation
library_name: transformers
tags:
- unlearn
- machine-unlearning
- llm-unlearning
- data-privacy
- large-language-models
- trustworthy-ai
- trustworthy-machine-learning
- language-model
---
# Origin Model on Task "TOFU"
## Model Details
- **Training**:
- **Task**: [🤗datasets/locuslab/TOFU](https://huggingface.co/datasets/locuslab/TOFU)
- **Method**: Fine tune
- **Base Model**: [🤗NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf)
- **Code Base**: [github.com/OPTML-Group/Unlearn-Simple](https://github.com/OPTML-Group/Unlearn-Simple)
- **Research Paper**:
- ["Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning"](https://arxiv.org/abs/2410.07163)
- ["TOFU: A Task of Fictitious Unlearning for LLMs"](https://arxiv.org/abs/2401.06121)
## Loading the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("OPTML-Group/TOFU-origin-Llama-2-7b-chat", use_flash_attention_2=True, torch_dtype=torch.bfloat16, trust_remote_code=True)
```
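
A minimal generation sketch with the loaded model and tokenizer (the prompt and decoding settings are illustrative, not taken from the TOFU data):

```python
# Hedged generation example; prompt and generation settings are illustrative.
prompt = "Question: What kind of author profiles does the TOFU benchmark describe?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```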
## Citation
If you use this model in your research, please cite:
```
@article{fan2024simplicity,
title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning},
author={Fan, Chongyu and Liu, Jiancheng and Lin, Licong and Jia, Jinghan and Zhang, Ruiqi and Mei, Song and Liu, Sijia},
journal={arXiv preprint arXiv:2410.07163},
year={2024}
}
```
## Reporting Issues
Reporting issues with the model: [github.com/OPTML-Group/Unlearn-Simple](https://github.com/OPTML-Group/Unlearn-Simple) |
OPTML-Group/SimNPO-MUSE-News-Llama-2-7b | OPTML-Group | 2024-11-06T16:24:35Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unlearn",
"machine-unlearning",
"llm-unlearning",
"data-privacy",
"large-language-models",
"trustworthy-ai",
"trustworthy-machine-learning",
"language-model",
"en",
"dataset:muse-bench/MUSE-News",
"arxiv:2410.07163",
"base_model:muse-bench/MUSE-news_target",
"base_model:finetune:muse-bench/MUSE-news_target",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-24T20:14:13Z | ---
license: mit
datasets:
- muse-bench/MUSE-News
language:
- en
base_model:
- muse-bench/MUSE-news_target
pipeline_tag: text-generation
library_name: transformers
tags:
- unlearn
- machine-unlearning
- llm-unlearning
- data-privacy
- large-language-models
- trustworthy-ai
- trustworthy-machine-learning
- language-model
---
# SimNPO-Unlearned Model on Task "MUSE - News"
## Model Details
- **Unlearning**:
- **Task**: [🤗datasets/muse-bench/MUSE-News](https://huggingface.co/datasets/muse-bench/MUSE-News)
- **Method**: [SimNPO](https://arxiv.org/abs/2410.07163)
- **Origin Model**: [🤗muse-bench/MUSE-news_target](https://huggingface.co/muse-bench/MUSE-news_target)
- **Code Base**: [github.com/OPTML-Group/Unlearn-Simple](https://github.com/OPTML-Group/Unlearn-Simple)
- **Research Paper**: ["Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning"](https://arxiv.org/abs/2410.07163)
## Unlearning Algorithm
This model uses the `SimNPO` unlearning algorithm with the following optimization objective:
$$\ell_{SimNPO}(\mathbf{\theta}) = \mathbb{E}_{(x, y) \in \mathcal{D}_f}\left[-\frac{2}{\beta}\log\sigma\left(-\frac{\beta}{|y|}\log\pi_{\mathbf{\theta}}(y|x) - \gamma\right)\right] + \lambda \mathbb{E}_{(x, y) \in \mathcal{D}_r}[-\log\pi_{\mathbf{\theta}} (y|x)]$$
Unlearning hyper-parameters:
- Learning Rate: `1e-5`
- beta: `0.7`
- lambda: `1.0`
- gamma: `3.0`
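
For reference, a minimal PyTorch sketch of this objective is shown below; it assumes per-sequence summed log-probabilities and response lengths are already computed, and variable names/batching are illustrative rather than taken from the released code.

```python
# Hedged sketch of the SimNPO objective above (not the released implementation).
import torch
import torch.nn.functional as F

def simnpo_loss(forget_logps, forget_lens, retain_logps,
                beta: float = 0.7, gamma: float = 3.0, lam: float = 1.0):
    # forget_logps: summed log pi_theta(y|x) per forget-set sequence, shape (B,)
    # forget_lens:  response lengths |y| per forget-set sequence, shape (B,)
    # retain_logps: summed log pi_theta(y|x) per retain-set sequence, shape (B,)
    forget_term = F.logsigmoid(-(beta / forget_lens) * forget_logps - gamma)
    forget_loss = -(2.0 / beta) * forget_term        # forget-set term
    retain_loss = -retain_logps                      # retain-set NLL regularizer
    return forget_loss.mean() + lam * retain_loss.mean()
```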
## Loading the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("OPTML-Group/SimNPO-MUSE-News-llama-2-7b", torch_dtype=torch.bfloat16, device_map='auto')
```
## Evaluation Results
||VerbMem Df|KnowMem Df|PrivLeak|KnowMem Dr|
|---|---|---|---|---|
|Origin|58.29|62.93|-98.71|54.31|
|Retrain|20.75|33.32|0.00|53.79|
|NPO|0.00|56.93|56.93|108.91|
|**SimNPO**|12.90|47.09|11.90|40.31|
## Citation
If you use this model in your research, please cite:
```
@article{fan2024simplicity,
title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning},
author={Fan, Chongyu and Liu, Jiancheng and Lin, Licong and Jia, Jinghan and Zhang, Ruiqi and Mei, Song and Liu, Sijia},
journal={arXiv preprint arXiv:2410.07163},
year={2024}
}
```
## Reporting Issues
Reporting issues with the model: [github.com/OPTML-Group/Unlearn-Simple](https://github.com/OPTML-Group/Unlearn-Simple) |
MayBashendy/ASAP_FineTuningBERT_Aug_k20_task1_organization_fold4 | MayBashendy | 2024-11-06T16:17:51Z | 161 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T15:44:54Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k20_task1_organization_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k20_task1_organization_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4950
- Qwk: 0.6411
- Mse: 0.4950
- Rmse: 0.7035
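
The evaluation script is not part of the card; a hedged sketch of how Qwk, MSE and RMSE are commonly computed with scikit-learn is shown below (the example scores and the rounding of regression outputs to integer labels are assumptions):

```python
# Hedged metric sketch; y_true / y_pred are placeholders, and rounding the
# regression outputs to integer scores for Qwk is an assumption.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([2, 3, 4, 3])           # gold scores (placeholder)
y_pred = np.array([2.2, 2.8, 4.1, 3.4])   # model predictions (placeholder)

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
print(qwk, mse, rmse)
```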
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 0.0063 | 2 | 10.1861 | 0.0 | 10.1861 | 3.1916 |
| No log | 0.0126 | 4 | 8.5953 | -0.0005 | 8.5953 | 2.9318 |
| No log | 0.0189 | 6 | 6.9159 | 0.0051 | 6.9159 | 2.6298 |
| No log | 0.0252 | 8 | 5.5130 | 0.0037 | 5.5130 | 2.3480 |
| No log | 0.0315 | 10 | 4.3816 | 0.0018 | 4.3816 | 2.0932 |
| No log | 0.0379 | 12 | 3.5082 | 0.0492 | 3.5082 | 1.8730 |
| No log | 0.0442 | 14 | 2.7686 | 0.0128 | 2.7686 | 1.6639 |
| No log | 0.0505 | 16 | 2.1322 | 0.0118 | 2.1322 | 1.4602 |
| No log | 0.0568 | 18 | 1.6261 | 0.0079 | 1.6261 | 1.2752 |
| No log | 0.0631 | 20 | 1.2562 | 0.1722 | 1.2562 | 1.1208 |
| No log | 0.0694 | 22 | 1.0333 | 0.0420 | 1.0333 | 1.0165 |
| No log | 0.0757 | 24 | 0.8915 | 0.0316 | 0.8915 | 0.9442 |
| No log | 0.0820 | 26 | 0.8074 | 0.0316 | 0.8074 | 0.8986 |
| No log | 0.0883 | 28 | 0.7660 | 0.0316 | 0.7660 | 0.8752 |
| No log | 0.0946 | 30 | 0.7689 | 0.0542 | 0.7689 | 0.8769 |
| No log | 0.1009 | 32 | 0.9386 | 0.0937 | 0.9386 | 0.9688 |
| No log | 0.1073 | 34 | 0.8347 | 0.0771 | 0.8347 | 0.9136 |
| No log | 0.1136 | 36 | 0.8293 | 0.4385 | 0.8293 | 0.9106 |
| No log | 0.1199 | 38 | 0.8916 | 0.3628 | 0.8916 | 0.9442 |
| No log | 0.1262 | 40 | 0.8068 | 0.0212 | 0.8068 | 0.8982 |
| No log | 0.1325 | 42 | 0.8411 | 0.0344 | 0.8411 | 0.9171 |
| No log | 0.1388 | 44 | 0.8499 | 0.0344 | 0.8499 | 0.9219 |
| No log | 0.1451 | 46 | 0.8047 | 0.0107 | 0.8047 | 0.8970 |
| No log | 0.1514 | 48 | 0.7906 | 0.0107 | 0.7906 | 0.8892 |
| No log | 0.1577 | 50 | 0.7428 | 0.0317 | 0.7428 | 0.8619 |
| No log | 0.1640 | 52 | 0.7615 | 0.0511 | 0.7615 | 0.8726 |
| No log | 0.1703 | 54 | 0.7432 | 0.0792 | 0.7432 | 0.8621 |
| No log | 0.1767 | 56 | 0.6753 | 0.0610 | 0.6753 | 0.8218 |
| No log | 0.1830 | 58 | 0.6924 | 0.0317 | 0.6924 | 0.8321 |
| No log | 0.1893 | 60 | 0.7336 | 0.0730 | 0.7336 | 0.8565 |
| No log | 0.1956 | 62 | 0.7216 | 0.0213 | 0.7216 | 0.8495 |
| No log | 0.2019 | 64 | 0.6734 | 0.0826 | 0.6734 | 0.8206 |
| No log | 0.2082 | 66 | 0.8115 | 0.1971 | 0.8115 | 0.9008 |
| No log | 0.2145 | 68 | 1.0608 | 0.2342 | 1.0608 | 1.0300 |
| No log | 0.2208 | 70 | 0.8848 | 0.2293 | 0.8848 | 0.9406 |
| No log | 0.2271 | 72 | 0.6445 | 0.1331 | 0.6445 | 0.8028 |
| No log | 0.2334 | 74 | 0.6672 | 0.0803 | 0.6672 | 0.8168 |
| No log | 0.2397 | 76 | 0.6616 | 0.0754 | 0.6616 | 0.8134 |
| No log | 0.2461 | 78 | 0.6149 | 0.1067 | 0.6149 | 0.7842 |
| No log | 0.2524 | 80 | 0.6896 | 0.1973 | 0.6896 | 0.8304 |
| No log | 0.2587 | 82 | 0.7505 | 0.2167 | 0.7505 | 0.8663 |
| No log | 0.2650 | 84 | 0.6389 | 0.1883 | 0.6389 | 0.7993 |
| No log | 0.2713 | 86 | 0.6107 | 0.2957 | 0.6107 | 0.7815 |
| No log | 0.2776 | 88 | 0.6234 | 0.3088 | 0.6234 | 0.7895 |
| No log | 0.2839 | 90 | 0.5901 | 0.2657 | 0.5901 | 0.7681 |
| No log | 0.2902 | 92 | 0.6248 | 0.1786 | 0.6248 | 0.7905 |
| No log | 0.2965 | 94 | 0.6419 | 0.2214 | 0.6419 | 0.8012 |
| No log | 0.3028 | 96 | 0.5860 | 0.2699 | 0.5860 | 0.7655 |
| No log | 0.3091 | 98 | 0.5766 | 0.2956 | 0.5766 | 0.7593 |
| No log | 0.3155 | 100 | 0.5547 | 0.3623 | 0.5547 | 0.7448 |
| No log | 0.3218 | 102 | 0.5514 | 0.4222 | 0.5514 | 0.7426 |
| No log | 0.3281 | 104 | 0.5460 | 0.4061 | 0.5460 | 0.7389 |
| No log | 0.3344 | 106 | 0.5756 | 0.3134 | 0.5756 | 0.7587 |
| No log | 0.3407 | 108 | 0.6144 | 0.3095 | 0.6144 | 0.7838 |
| No log | 0.3470 | 110 | 0.5301 | 0.4421 | 0.5301 | 0.7280 |
| No log | 0.3533 | 112 | 0.5429 | 0.4684 | 0.5429 | 0.7368 |
| No log | 0.3596 | 114 | 0.5177 | 0.4759 | 0.5177 | 0.7195 |
| No log | 0.3659 | 116 | 0.5241 | 0.4151 | 0.5241 | 0.7240 |
| No log | 0.3722 | 118 | 0.5069 | 0.4161 | 0.5069 | 0.7120 |
| No log | 0.3785 | 120 | 0.5293 | 0.4872 | 0.5293 | 0.7275 |
| No log | 0.3849 | 122 | 0.5688 | 0.4517 | 0.5688 | 0.7542 |
| No log | 0.3912 | 124 | 0.5780 | 0.2445 | 0.5780 | 0.7603 |
| No log | 0.3975 | 126 | 0.5334 | 0.4100 | 0.5334 | 0.7304 |
| No log | 0.4038 | 128 | 0.5552 | 0.5686 | 0.5552 | 0.7451 |
| No log | 0.4101 | 130 | 0.5369 | 0.5723 | 0.5369 | 0.7327 |
| No log | 0.4164 | 132 | 0.5145 | 0.3755 | 0.5145 | 0.7173 |
| No log | 0.4227 | 134 | 0.5181 | 0.4368 | 0.5181 | 0.7198 |
| No log | 0.4290 | 136 | 0.5175 | 0.4105 | 0.5175 | 0.7194 |
| No log | 0.4353 | 138 | 0.5481 | 0.5205 | 0.5481 | 0.7403 |
| No log | 0.4416 | 140 | 0.5561 | 0.4941 | 0.5561 | 0.7457 |
| No log | 0.4479 | 142 | 0.5308 | 0.5019 | 0.5308 | 0.7286 |
| No log | 0.4543 | 144 | 0.5421 | 0.4929 | 0.5421 | 0.7363 |
| No log | 0.4606 | 146 | 0.5182 | 0.4383 | 0.5182 | 0.7198 |
| No log | 0.4669 | 148 | 0.5113 | 0.4444 | 0.5113 | 0.7151 |
| No log | 0.4732 | 150 | 0.5292 | 0.3937 | 0.5292 | 0.7275 |
| No log | 0.4795 | 152 | 0.5153 | 0.4278 | 0.5153 | 0.7179 |
| No log | 0.4858 | 154 | 0.4959 | 0.4610 | 0.4959 | 0.7042 |
| No log | 0.4921 | 156 | 0.4822 | 0.4742 | 0.4822 | 0.6944 |
| No log | 0.4984 | 158 | 0.5207 | 0.5700 | 0.5207 | 0.7216 |
| No log | 0.5047 | 160 | 0.6361 | 0.5602 | 0.6361 | 0.7976 |
| No log | 0.5110 | 162 | 0.5405 | 0.5354 | 0.5405 | 0.7352 |
| No log | 0.5174 | 164 | 0.5536 | 0.5347 | 0.5536 | 0.7440 |
| No log | 0.5237 | 166 | 0.5308 | 0.5142 | 0.5308 | 0.7285 |
| No log | 0.5300 | 168 | 0.5827 | 0.5080 | 0.5827 | 0.7634 |
| No log | 0.5363 | 170 | 0.6033 | 0.5139 | 0.6033 | 0.7767 |
| No log | 0.5426 | 172 | 0.7514 | 0.5038 | 0.7514 | 0.8669 |
| No log | 0.5489 | 174 | 0.7327 | 0.5197 | 0.7327 | 0.8560 |
| No log | 0.5552 | 176 | 0.5563 | 0.5225 | 0.5563 | 0.7459 |
| No log | 0.5615 | 178 | 0.5157 | 0.4842 | 0.5157 | 0.7181 |
| No log | 0.5678 | 180 | 0.5430 | 0.5432 | 0.5430 | 0.7369 |
| No log | 0.5741 | 182 | 0.5386 | 0.5786 | 0.5386 | 0.7339 |
| No log | 0.5804 | 184 | 0.4900 | 0.5768 | 0.4900 | 0.7000 |
| No log | 0.5868 | 186 | 0.5030 | 0.5908 | 0.5030 | 0.7092 |
| No log | 0.5931 | 188 | 0.4526 | 0.5804 | 0.4526 | 0.6728 |
| No log | 0.5994 | 190 | 0.5105 | 0.4823 | 0.5105 | 0.7145 |
| No log | 0.6057 | 192 | 0.5870 | 0.4220 | 0.5870 | 0.7662 |
| No log | 0.6120 | 194 | 0.5511 | 0.4319 | 0.5511 | 0.7423 |
| No log | 0.6183 | 196 | 0.4500 | 0.5472 | 0.4500 | 0.6708 |
| No log | 0.6246 | 198 | 0.4526 | 0.5562 | 0.4526 | 0.6728 |
| No log | 0.6309 | 200 | 0.5135 | 0.5754 | 0.5135 | 0.7166 |
| No log | 0.6372 | 202 | 0.6373 | 0.5419 | 0.6373 | 0.7983 |
| No log | 0.6435 | 204 | 0.5640 | 0.5393 | 0.5640 | 0.7510 |
| No log | 0.6498 | 206 | 0.5375 | 0.5351 | 0.5375 | 0.7332 |
| No log | 0.6562 | 208 | 0.5511 | 0.5560 | 0.5511 | 0.7423 |
| No log | 0.6625 | 210 | 0.5414 | 0.5693 | 0.5414 | 0.7358 |
| No log | 0.6688 | 212 | 0.5304 | 0.5811 | 0.5304 | 0.7283 |
| No log | 0.6751 | 214 | 0.4758 | 0.5939 | 0.4758 | 0.6898 |
| No log | 0.6814 | 216 | 0.4437 | 0.5481 | 0.4437 | 0.6661 |
| No log | 0.6877 | 218 | 0.4368 | 0.5673 | 0.4368 | 0.6609 |
| No log | 0.6940 | 220 | 0.4946 | 0.6281 | 0.4946 | 0.7033 |
| No log | 0.7003 | 222 | 0.4564 | 0.5958 | 0.4564 | 0.6756 |
| No log | 0.7066 | 224 | 0.4662 | 0.5795 | 0.4662 | 0.6828 |
| No log | 0.7129 | 226 | 0.5187 | 0.6018 | 0.5187 | 0.7202 |
| No log | 0.7192 | 228 | 0.5179 | 0.6018 | 0.5179 | 0.7196 |
| No log | 0.7256 | 230 | 0.4883 | 0.6011 | 0.4883 | 0.6988 |
| No log | 0.7319 | 232 | 0.4581 | 0.5898 | 0.4581 | 0.6768 |
| No log | 0.7382 | 234 | 0.5164 | 0.6064 | 0.5164 | 0.7186 |
| No log | 0.7445 | 236 | 0.4880 | 0.6120 | 0.4880 | 0.6986 |
| No log | 0.7508 | 238 | 0.4608 | 0.6049 | 0.4608 | 0.6788 |
| No log | 0.7571 | 240 | 0.5627 | 0.6490 | 0.5627 | 0.7502 |
| No log | 0.7634 | 242 | 0.8123 | 0.6725 | 0.8123 | 0.9013 |
| No log | 0.7697 | 244 | 0.6433 | 0.6624 | 0.6433 | 0.8021 |
| No log | 0.7760 | 246 | 0.4387 | 0.5914 | 0.4387 | 0.6624 |
| No log | 0.7823 | 248 | 0.4507 | 0.5951 | 0.4507 | 0.6713 |
| No log | 0.7886 | 250 | 0.6574 | 0.6299 | 0.6574 | 0.8108 |
| No log | 0.7950 | 252 | 0.9073 | 0.5748 | 0.9073 | 0.9525 |
| No log | 0.8013 | 254 | 0.7567 | 0.5976 | 0.7567 | 0.8699 |
| No log | 0.8076 | 256 | 0.4780 | 0.5993 | 0.4780 | 0.6914 |
| No log | 0.8139 | 258 | 0.4653 | 0.4804 | 0.4653 | 0.6821 |
| No log | 0.8202 | 260 | 0.4593 | 0.5099 | 0.4593 | 0.6777 |
| No log | 0.8265 | 262 | 0.5150 | 0.5981 | 0.5150 | 0.7176 |
| No log | 0.8328 | 264 | 0.7188 | 0.5631 | 0.7188 | 0.8478 |
| No log | 0.8391 | 266 | 0.6870 | 0.5665 | 0.6870 | 0.8289 |
| No log | 0.8454 | 268 | 0.5103 | 0.6082 | 0.5103 | 0.7144 |
| No log | 0.8517 | 270 | 0.4610 | 0.4952 | 0.4610 | 0.6790 |
| No log | 0.8580 | 272 | 0.5092 | 0.4066 | 0.5092 | 0.7136 |
| No log | 0.8644 | 274 | 0.4640 | 0.4861 | 0.4640 | 0.6812 |
| No log | 0.8707 | 276 | 0.4945 | 0.5916 | 0.4945 | 0.7032 |
| No log | 0.8770 | 278 | 0.6582 | 0.5572 | 0.6582 | 0.8113 |
| No log | 0.8833 | 280 | 0.6694 | 0.5610 | 0.6694 | 0.8181 |
| No log | 0.8896 | 282 | 0.5728 | 0.5254 | 0.5728 | 0.7568 |
| No log | 0.8959 | 284 | 0.5221 | 0.4152 | 0.5221 | 0.7226 |
| No log | 0.9022 | 286 | 0.4807 | 0.4751 | 0.4807 | 0.6933 |
| No log | 0.9085 | 288 | 0.4549 | 0.5473 | 0.4549 | 0.6745 |
| No log | 0.9148 | 290 | 0.4556 | 0.5597 | 0.4556 | 0.6750 |
| No log | 0.9211 | 292 | 0.4582 | 0.5556 | 0.4582 | 0.6769 |
| No log | 0.9274 | 294 | 0.4645 | 0.5505 | 0.4645 | 0.6816 |
| No log | 0.9338 | 296 | 0.4678 | 0.5381 | 0.4678 | 0.6840 |
| No log | 0.9401 | 298 | 0.4749 | 0.5534 | 0.4749 | 0.6892 |
| No log | 0.9464 | 300 | 0.5625 | 0.5975 | 0.5625 | 0.7500 |
| No log | 0.9527 | 302 | 0.5900 | 0.5826 | 0.5900 | 0.7681 |
| No log | 0.9590 | 304 | 0.4926 | 0.5950 | 0.4926 | 0.7019 |
| No log | 0.9653 | 306 | 0.4816 | 0.4778 | 0.4816 | 0.6940 |
| No log | 0.9716 | 308 | 0.4785 | 0.5246 | 0.4785 | 0.6917 |
| No log | 0.9779 | 310 | 0.4967 | 0.5915 | 0.4967 | 0.7048 |
| No log | 0.9842 | 312 | 0.4777 | 0.5359 | 0.4777 | 0.6912 |
| No log | 0.9905 | 314 | 0.5052 | 0.4469 | 0.5052 | 0.7108 |
| No log | 0.9968 | 316 | 0.4870 | 0.4692 | 0.4870 | 0.6978 |
| No log | 1.0032 | 318 | 0.4959 | 0.6014 | 0.4959 | 0.7042 |
| No log | 1.0095 | 320 | 0.5971 | 0.6622 | 0.5971 | 0.7727 |
| No log | 1.0158 | 322 | 0.6224 | 0.6527 | 0.6224 | 0.7889 |
| No log | 1.0221 | 324 | 0.5090 | 0.6125 | 0.5090 | 0.7134 |
| No log | 1.0284 | 326 | 0.4859 | 0.6161 | 0.4859 | 0.6970 |
| No log | 1.0347 | 328 | 0.5575 | 0.6373 | 0.5575 | 0.7466 |
| No log | 1.0410 | 330 | 0.6631 | 0.6354 | 0.6631 | 0.8143 |
| No log | 1.0473 | 332 | 0.7880 | 0.6128 | 0.7880 | 0.8877 |
| No log | 1.0536 | 334 | 0.6328 | 0.6471 | 0.6328 | 0.7955 |
| No log | 1.0599 | 336 | 0.4833 | 0.5926 | 0.4833 | 0.6952 |
| No log | 1.0662 | 338 | 0.4764 | 0.5915 | 0.4764 | 0.6902 |
| No log | 1.0726 | 340 | 0.4879 | 0.6097 | 0.4879 | 0.6985 |
| No log | 1.0789 | 342 | 0.5004 | 0.6328 | 0.5004 | 0.7074 |
| No log | 1.0852 | 344 | 0.4558 | 0.5696 | 0.4558 | 0.6752 |
| No log | 1.0915 | 346 | 0.4638 | 0.5143 | 0.4638 | 0.6811 |
| No log | 1.0978 | 348 | 0.4590 | 0.5340 | 0.4590 | 0.6775 |
| No log | 1.1041 | 350 | 0.4556 | 0.5999 | 0.4556 | 0.6750 |
| No log | 1.1104 | 352 | 0.4521 | 0.5984 | 0.4521 | 0.6724 |
| No log | 1.1167 | 354 | 0.4603 | 0.5902 | 0.4603 | 0.6784 |
| No log | 1.1230 | 356 | 0.5085 | 0.6098 | 0.5085 | 0.7131 |
| No log | 1.1293 | 358 | 0.5851 | 0.6319 | 0.5851 | 0.7649 |
| No log | 1.1356 | 360 | 0.5377 | 0.6091 | 0.5377 | 0.7333 |
| No log | 1.1420 | 362 | 0.4673 | 0.5626 | 0.4673 | 0.6836 |
| No log | 1.1483 | 364 | 0.4611 | 0.5643 | 0.4611 | 0.6790 |
| No log | 1.1546 | 366 | 0.4560 | 0.5333 | 0.4560 | 0.6753 |
| No log | 1.1609 | 368 | 0.4761 | 0.4842 | 0.4761 | 0.6900 |
| No log | 1.1672 | 370 | 0.4581 | 0.5306 | 0.4581 | 0.6768 |
| No log | 1.1735 | 372 | 0.4492 | 0.5837 | 0.4492 | 0.6702 |
| No log | 1.1798 | 374 | 0.4585 | 0.6097 | 0.4585 | 0.6771 |
| No log | 1.1861 | 376 | 0.4451 | 0.5503 | 0.4451 | 0.6672 |
| No log | 1.1924 | 378 | 0.4524 | 0.5227 | 0.4524 | 0.6726 |
| No log | 1.1987 | 380 | 0.4546 | 0.5008 | 0.4546 | 0.6742 |
| No log | 1.2050 | 382 | 0.4735 | 0.5442 | 0.4735 | 0.6881 |
| No log | 1.2114 | 384 | 0.5067 | 0.5698 | 0.5067 | 0.7118 |
| No log | 1.2177 | 386 | 0.4892 | 0.4913 | 0.4892 | 0.6994 |
| No log | 1.2240 | 388 | 0.4975 | 0.5099 | 0.4975 | 0.7053 |
| No log | 1.2303 | 390 | 0.6492 | 0.6296 | 0.6492 | 0.8057 |
| No log | 1.2366 | 392 | 0.7328 | 0.6114 | 0.7328 | 0.8561 |
| No log | 1.2429 | 394 | 0.5539 | 0.6157 | 0.5539 | 0.7443 |
| No log | 1.2492 | 396 | 0.5265 | 0.4173 | 0.5265 | 0.7256 |
| No log | 1.2555 | 398 | 0.6128 | 0.3532 | 0.6128 | 0.7828 |
| No log | 1.2618 | 400 | 0.5354 | 0.4003 | 0.5354 | 0.7317 |
| No log | 1.2681 | 402 | 0.4935 | 0.5464 | 0.4935 | 0.7025 |
| No log | 1.2744 | 404 | 0.5745 | 0.6324 | 0.5745 | 0.7579 |
| No log | 1.2808 | 406 | 0.5167 | 0.6236 | 0.5167 | 0.7188 |
| No log | 1.2871 | 408 | 0.4620 | 0.5427 | 0.4620 | 0.6797 |
| No log | 1.2934 | 410 | 0.4585 | 0.5055 | 0.4585 | 0.6772 |
| No log | 1.2997 | 412 | 0.4691 | 0.5926 | 0.4691 | 0.6849 |
| No log | 1.3060 | 414 | 0.5962 | 0.6760 | 0.5962 | 0.7722 |
| No log | 1.3123 | 416 | 0.5452 | 0.6593 | 0.5452 | 0.7384 |
| No log | 1.3186 | 418 | 0.4661 | 0.6018 | 0.4661 | 0.6827 |
| No log | 1.3249 | 420 | 0.4503 | 0.5347 | 0.4503 | 0.6710 |
| No log | 1.3312 | 422 | 0.4594 | 0.5752 | 0.4594 | 0.6778 |
| No log | 1.3375 | 424 | 0.5623 | 0.6484 | 0.5623 | 0.7499 |
| No log | 1.3438 | 426 | 0.5562 | 0.6429 | 0.5562 | 0.7458 |
| No log | 1.3502 | 428 | 0.4545 | 0.5922 | 0.4545 | 0.6742 |
| No log | 1.3565 | 430 | 0.4446 | 0.5818 | 0.4446 | 0.6668 |
| No log | 1.3628 | 432 | 0.5001 | 0.6472 | 0.5001 | 0.7072 |
| No log | 1.3691 | 434 | 0.5172 | 0.6548 | 0.5172 | 0.7192 |
| No log | 1.3754 | 436 | 0.4511 | 0.5994 | 0.4511 | 0.6716 |
| No log | 1.3817 | 438 | 0.4721 | 0.5433 | 0.4721 | 0.6871 |
| No log | 1.3880 | 440 | 0.4686 | 0.6124 | 0.4686 | 0.6846 |
| No log | 1.3943 | 442 | 0.5272 | 0.6602 | 0.5272 | 0.7261 |
| No log | 1.4006 | 444 | 0.4777 | 0.6232 | 0.4777 | 0.6912 |
| No log | 1.4069 | 446 | 0.4745 | 0.4864 | 0.4745 | 0.6888 |
| No log | 1.4132 | 448 | 0.4813 | 0.4603 | 0.4813 | 0.6938 |
| No log | 1.4196 | 450 | 0.4566 | 0.5352 | 0.4566 | 0.6757 |
| No log | 1.4259 | 452 | 0.5087 | 0.6295 | 0.5087 | 0.7132 |
| No log | 1.4322 | 454 | 0.5272 | 0.6279 | 0.5272 | 0.7261 |
| No log | 1.4385 | 456 | 0.4695 | 0.5742 | 0.4695 | 0.6852 |
| No log | 1.4448 | 458 | 0.4613 | 0.5300 | 0.4613 | 0.6792 |
| No log | 1.4511 | 460 | 0.4807 | 0.4327 | 0.4807 | 0.6933 |
| No log | 1.4574 | 462 | 0.4712 | 0.4831 | 0.4712 | 0.6865 |
| No log | 1.4637 | 464 | 0.5262 | 0.6207 | 0.5262 | 0.7254 |
| No log | 1.4700 | 466 | 0.5679 | 0.6533 | 0.5679 | 0.7536 |
| No log | 1.4763 | 468 | 0.4943 | 0.6319 | 0.4943 | 0.7030 |
| No log | 1.4826 | 470 | 0.4548 | 0.5373 | 0.4548 | 0.6744 |
| No log | 1.4890 | 472 | 0.4529 | 0.5669 | 0.4529 | 0.6730 |
| No log | 1.4953 | 474 | 0.4979 | 0.6578 | 0.4979 | 0.7056 |
| No log | 1.5016 | 476 | 0.5480 | 0.6783 | 0.5480 | 0.7402 |
| No log | 1.5079 | 478 | 0.4760 | 0.5831 | 0.4760 | 0.6900 |
| No log | 1.5142 | 480 | 0.4790 | 0.4885 | 0.4790 | 0.6921 |
| No log | 1.5205 | 482 | 0.4733 | 0.4948 | 0.4733 | 0.6879 |
| No log | 1.5268 | 484 | 0.4930 | 0.6107 | 0.4930 | 0.7021 |
| No log | 1.5331 | 486 | 0.6387 | 0.6998 | 0.6387 | 0.7992 |
| No log | 1.5394 | 488 | 0.5770 | 0.6947 | 0.5770 | 0.7596 |
| No log | 1.5457 | 490 | 0.4507 | 0.5730 | 0.4507 | 0.6713 |
| No log | 1.5521 | 492 | 0.4761 | 0.4890 | 0.4761 | 0.6900 |
| No log | 1.5584 | 494 | 0.4524 | 0.5010 | 0.4524 | 0.6726 |
| No log | 1.5647 | 496 | 0.4512 | 0.5824 | 0.4512 | 0.6717 |
| No log | 1.5710 | 498 | 0.5386 | 0.6594 | 0.5386 | 0.7339 |
| 0.5 | 1.5773 | 500 | 0.5441 | 0.6588 | 0.5441 | 0.7376 |
| 0.5 | 1.5836 | 502 | 0.5217 | 0.6468 | 0.5217 | 0.7223 |
| 0.5 | 1.5899 | 504 | 0.4504 | 0.5555 | 0.4504 | 0.6711 |
| 0.5 | 1.5962 | 506 | 0.4459 | 0.5713 | 0.4459 | 0.6677 |
| 0.5 | 1.6025 | 508 | 0.4642 | 0.6069 | 0.4642 | 0.6813 |
| 0.5 | 1.6088 | 510 | 0.4950 | 0.6411 | 0.4950 | 0.7035 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF | mattritchey | 2024-11-06T16:14:40Z | 7 | 0 | null | [
"gguf",
"HelpingAI",
"Emotionally-Intelligent",
"EQ-focused- EQ-focused",
"Conversational",
"SLM",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:HelpingAI/HelpingAI2-3B",
"base_model:quantized:HelpingAI/HelpingAI2-3B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-06T16:14:29Z | ---
license: other
license_name: helpingai
license_link: https://huggingface.co/OEvortex/HelpingAI-3B-v3/blob/main/LICENSE.md
pipeline_tag: text-generation
language:
- en
tags:
- HelpingAI
- Emotionally-Intelligent
- EQ-focused- EQ-focused
- Conversational
- SLM
- llama-cpp
- gguf-my-repo
base_model: OEvortex/HelpingAI-3B-reloaded
---
# mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF
This model was converted to GGUF format from [`OEvortex/HelpingAI-3B-reloaded`](https://huggingface.co/OEvortex/HelpingAI-3B-reloaded) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OEvortex/HelpingAI-3B-reloaded) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF --hf-file helpingai-3b-reloaded-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF --hf-file helpingai-3b-reloaded-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF --hf-file helpingai-3b-reloaded-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mattritchey/HelpingAI-3B-reloaded-Q4_K_M-GGUF --hf-file helpingai-3b-reloaded-q4_k_m.gguf -c 2048
```
|
mradermacher/BrokenKeyboard-GGUF | mradermacher | 2024-11-06T16:12:30Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:dhanushreddy29/BrokenKeyboard",
"base_model:quantized:dhanushreddy29/BrokenKeyboard",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T15:10:59Z | ---
base_model: dhanushreddy29/BrokenKeyboard
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/dhanushreddy29/BrokenKeyboard
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q4_0_4_4.gguf) | Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BrokenKeyboard-GGUF/resolve/main/BrokenKeyboard.f16.gguf) | f16 | 21.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
projecte-aina/aina-translator-pt-ca | projecte-aina | 2024-11-06T16:06:15Z | 4 | 0 | fairseq | [
"fairseq",
"pt",
"ca",
"dataset:projecte-aina/CA-PT_Parallel_Corpus",
"doi:10.57967/hf/1931",
"license:apache-2.0",
"region:us"
] | null | 2023-11-22T15:12:42Z | ---
license: apache-2.0
datasets:
- projecte-aina/CA-PT_Parallel_Corpus
language:
- pt
- ca
metrics:
- bleu
library_name: fairseq
---
## Projecte Aina’s Portuguese-Catalan machine translation model
## Model description
This model was trained from scratch using the Fairseq toolkit on a combination of datasets comprising both Catalan-Portuguese data sourced from Opus, and additional datasets where synthetic Catalan was generated from the Spanish side of Spanish-Portuguese corpora using Projecte Aina’s Spanish-Catalan model. This gave a total of approximately 100 million sentence pairs. The model is evaluated on the Flores, NTEU and NTREX evaluation sets.
## Intended uses and limitations
You can use this model for machine translation from Portuguese to Catalan.
## How to use
### Usage
Required libraries:
```bash
pip install ctranslate2 pyonmttok
```
Translate a sentence using python
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download
model_dir = snapshot_download(repo_id="projecte-aina/aina-translator-pt-ca", revision="main")
# Tokenize the Portuguese source sentence with the bundled SentencePiece model
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
tokenized = tokenizer.tokenize("Bem-vindo ao Projeto Aina!")
# Translate and detokenize into Catalan
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]["tokens"]))
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model.
However, we are well aware that our models may be biased. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
The model was trained on a combination of the following datasets:
| Datasets        |
|-----------------|
| DGT             |
| EU Bookshop     |
| Europarl        |
| Global Voices   |
| GNOME           |
| KDE 4           |
| Multi CCAligned |
| Multi Paracrawl |
| Multi UN        |
| NLLB            |
| NTEU            |
| Open Subtitles  |
| Tatoeba         |
| UNPC            |
| WikiMatrix      |
All data was sourced from [OPUS](https://opus.nlpl.eu/) and [ELRC](https://www.elrc-share.eu/). After all Catalan-Portuguese data had been collected, Spanish-Portuguese data was collected and the Spanish side was translated to Catalan using [Projecte Aina’s Spanish-Catalan model](https://huggingface.co/projecte-aina/aina-translator-es-ca).
### Training procedure
### Data preparation
All datasets are deduplicated, filtered for language identification, and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of 6.159.631 sentence pairs. Before training, the punctuation is normalized using a
modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
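For illustration only, a minimal sketch of this kind of LaBSE similarity filter (not the project's actual filtering script; function names and I/O are assumptions) could look like this:
```python
from sentence_transformers import SentenceTransformer, util
# Sketch only: keep sentence pairs whose LaBSE embeddings reach cosine similarity >= 0.75.
labse = SentenceTransformer("sentence-transformers/LaBSE")
def filter_pairs(pt_sentences, ca_sentences, threshold=0.75):
    emb_pt = labse.encode(pt_sentences, convert_to_tensor=True, normalize_embeddings=True)
    emb_ca = labse.encode(ca_sentences, convert_to_tensor=True, normalize_embeddings=True)
    kept = []
    for pt, ca, e_pt, e_ca in zip(pt_sentences, ca_sentences, emb_pt, emb_ca):
        if util.cos_sim(e_pt, e_ca).item() >= threshold:
            kept.append((pt, ca))
    return kept
```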
#### Tokenization
All data is tokenized using sentencepiece, with a 50 thousand token sentencepiece model learned from the combination of all filtered training data.
This model is included.
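For reference, learning such a joint vocabulary with the sentencepiece Python package is straightforward; the input file name below is an assumption, not the actual training artifact:
```python
import sentencepiece as spm
# Sketch only: learn a joint 50k-token SentencePiece model on the combined filtered data.
spm.SentencePieceTrainer.train(
    input="filtered.pt-ca.all.txt",  # assumed file name
    model_prefix="spm",
    vocab_size=50000,
)
```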
#### Hyperparameters
The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf)
The following hyperparameters were set on the Fairseq toolkit:
| Hyperparameter | Value |
|------------------------------------|----------------------------------|
| Architecture | transformer_vaswani_wmt_en_de_big |
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size | 48.000 |
| Optimizer | adam |
| Adam betas | (0.9, 0.980) |
| Clip norm | 0.0 |
| Learning rate | 5e-4 |
| Lr. scheduler                      | inverse sqrt                     |
| Warmup updates | 8000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |
The model was trained for a total of 12.000 updates. Weights were saved every 1000 updates and reported results are the average of the last 4 checkpoints.
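For illustration, the hyperparameters above map roughly onto a fairseq command along these lines; this is a sketch under assumed data paths and batching settings, not the exact command used:
```bash
# Sketch only: the data-bin path and omitted batching flags are assumptions.
fairseq-train data-bin/pt-ca \
    --arch transformer_vaswani_wmt_en_de_big \
    --encoder-layers 24 --decoder-layers 6 \
    --share-all-embeddings --share-decoder-input-output-embed \
    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
    --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 8000 \
    --dropout 0.1 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-update 12000
```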
## Evaluation
### Variable and metrics
We use the BLEU score for evaluation on the [Flores-101](https://github.com/facebookresearch/flores), NTEU and
[NTREX](https://github.com/MicrosoftTranslator/NTREX) test sets.
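As a rough illustration of how such scores can be reproduced (not the exact evaluation script used here; the file names are placeholders), BLEU can be computed with sacreBLEU:
```python
import sacrebleu
# Placeholder file names: system output and reference translations, one sentence per line.
with open("hyp.ca.txt") as f:
    hypotheses = [line.strip() for line in f]
with open("ref.ca.txt") as f:
    references = [line.strip() for line in f]
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(bleu.score)
```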
### Evaluation results
Below are the evaluation results on the machine translation from Portuguese to Catalan compared to [Softcatalà](https://www.softcatala.org/) and
[Google Translate](https://translate.google.es/?hl=es):
| Test set | SoftCatalà | Google Translate | aina-translator-pt-ca |
|----------------------|------------|------------------|---------------|
| Flores 101 dev | 32 | **38,3** | 35,8 |
| Flores 101 devtest |33,4 | **39** | 37,1 |
| NTEU | 41,6 | 44,9 | **48,3** |
| NTREX | 28,8 | **33,6** | 32,1 |
| **Average** | 33,9 | **38,9** | 38,3 |
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <[email protected]>.
### Copyright
Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).
### Disclaimer
<details>
<summary>Click to expand</summary>
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it)
or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and,
in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (Barcelona Supercomputing Center)
be liable for any results arising from the use made by third parties.
</details> |
camidenecken/RoBERTa-RM1-v2-2-rm-v31 | camidenecken | 2024-11-06T16:05:57Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T16:05:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
camidenecken/RoBERTa-RM1-v2-2-rm-v29 | camidenecken | 2024-11-06T16:01:41Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T16:01:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
camidenecken/RoBERTa-RM1-v2-2-rm-v26 | camidenecken | 2024-11-06T15:55:15Z | 180 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T15:54:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
camidenecken/RoBERTa-RM1-v2-2-rm-v25 | camidenecken | 2024-11-06T15:53:07Z | 181 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T15:52:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
richiebailey/whisper-large-v3-turbo | richiebailey | 2024-11-06T15:44:40Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-06T15:37:29Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
license: mit
tags:
- audio
- automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
pipeline_tag: automatic-speech-recognition
base_model:
- openai/whisper-large-v3
library_name: transformers
---
# Whisper
Whisper is a state-of-the-art model for automatic speech recognition (ASR) and speech translation, proposed in the paper
[Robust Speech Recognition via Large-Scale Weak Supervision](https://huggingface.co/papers/2212.04356) by Alec Radford
et al. from OpenAI. Trained on >5M hours of labeled data, Whisper demonstrates a strong ability to generalise to many
datasets and domains in a zero-shot setting.
Whisper large-v3-turbo is a finetuned version of a pruned [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3). In other words, it's the exact same model, except that the number of decoding layers has been reduced from 32 to 4.
As a result, the model is way faster, at the expense of a minor quality degradation. You can find more details about it [in this GitHub discussion](https://github.com/openai/whisper/discussions/2363).
**Disclaimer**: Content for this model card has partly been written by the 🤗 Hugging Face team, and partly copied and
pasted from the original model card.
## Usage
Whisper large-v3-turbo is supported in Hugging Face 🤗 Transformers. To run the model, first install the Transformers
library. For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub, and
🤗 Accelerate to reduce the model loading time:
```bash
pip install --upgrade pip
pip install --upgrade transformers datasets[audio] accelerate
```
The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class to transcribe audios of arbitrary length:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-large-v3-turbo"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:
```python
result = pipe("audio.mp3")
```
Multiple audio files can be transcribed in parallel by specifying them as a list and setting the `batch_size` parameter:
```python
result = pipe(["audio_1.mp3", "audio_2.mp3"], batch_size=2)
```
Transformers is compatible with all Whisper decoding strategies, such as temperature fallback and conditioning on previous
tokens. The following example demonstrates how to enable these heuristics:
```python
generate_kwargs = {
"max_new_tokens": 448,
"num_beams": 1,
"condition_on_prev_tokens": False,
"compression_ratio_threshold": 1.35, # zlib compression ratio threshold (in token space)
"temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
"logprob_threshold": -1.0,
"no_speech_threshold": 0.6,
"return_timestamps": True,
}
result = pipe(sample, generate_kwargs=generate_kwargs)
```
Whisper predicts the language of the source audio automatically. If the source audio language is known *a-priori*, it
can be passed as an argument to the pipeline:
```python
result = pipe(sample, generate_kwargs={"language": "english"})
```
By default, Whisper performs the task of *speech transcription*, where the source audio language is the same as the target
text language. To perform *speech translation*, where the target text is in English, set the task to `"translate"`:
```python
result = pipe(sample, generate_kwargs={"task": "translate"})
```
Finally, the model can be made to predict timestamps. For sentence-level timestamps, pass the `return_timestamps` argument:
```python
result = pipe(sample, return_timestamps=True)
print(result["chunks"])
```
And for word-level timestamps:
```python
result = pipe(sample, return_timestamps="word")
print(result["chunks"])
```
The above arguments can be used in isolation or in combination. For example, to perform the task of speech transcription
where the source audio is in French, and we want to return sentence-level timestamps, the following can be used:
```python
result = pipe(sample, return_timestamps=True, generate_kwargs={"language": "french", "task": "translate"})
print(result["chunks"])
```
<details>
<summary> For more control over the generation parameters, use the model + processor API directly: </summary>
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
from datasets import Audio, load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-large-v3-turbo"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
dataset = dataset.cast_column("audio", Audio(processor.feature_extractor.sampling_rate))
sample = dataset[0]["audio"]
inputs = processor(
sample["array"],
sampling_rate=sample["sampling_rate"],
return_tensors="pt",
truncation=False,
padding="longest",
return_attention_mask=True,
)
inputs = inputs.to(device, dtype=torch_dtype)
gen_kwargs = {
"max_new_tokens": 448,
"num_beams": 1,
"condition_on_prev_tokens": False,
"compression_ratio_threshold": 1.35, # zlib compression ratio threshold (in token space)
"temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
"logprob_threshold": -1.0,
"no_speech_threshold": 0.6,
"return_timestamps": True,
}
pred_ids = model.generate(**inputs, **gen_kwargs)
pred_text = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=False)
print(pred_text)
```
</details>
## Additional Speed & Memory Improvements
You can apply additional speed and memory improvements to Whisper to further reduce the inference speed and VRAM
requirements.
### Chunked Long-Form
Whisper has a receptive field of 30 seconds. To transcribe audio longer than this, one of two long-form algorithms is
required:
1. **Sequential:** uses a "sliding window" for buffered inference, transcribing 30-second slices one after the other
2. **Chunked:** splits long audio files into shorter ones (with a small overlap between segments), transcribes each segment independently, and stitches the resulting transcriptions at the boundaries
The sequential long-form algorithm should be used in either of the following scenarios:
1. Transcription accuracy is the most important factor, and speed is less of a consideration
2. You are transcribing **batches** of long audio files, in which case the latency of sequential is comparable to chunked, while being up to 0.5% WER more accurate
Conversely, the chunked algorithm should be used when:
1. Transcription speed is the most important factor
2. You are transcribing a **single** long audio file
By default, Transformers uses the sequential algorithm. To enable the chunked algorithm, pass the `chunk_length_s`
parameter to the `pipeline`. For large-v3, a chunk length of 30 seconds is optimal. To activate batching over long
audio files, pass the argument `batch_size`:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-large-v3-turbo"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
chunk_length_s=30,
batch_size=16, # batch size for inference - set based on your device
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
#### Torch compile
The Whisper forward pass is compatible with [`torch.compile`](https://pytorch.org/docs/stable/generated/torch.compile.html)
for 4.5x speed-ups.
**Note:** `torch.compile` is currently not compatible with the Chunked long-form algorithm or Flash Attention 2 ⚠️
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
from tqdm import tqdm
torch.set_float32_matmul_precision("high")
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-large-v3-turbo"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True
).to(device)
# Enable static cache and compile the forward pass
model.generation_config.cache_implementation = "static"
model.generation_config.max_new_tokens = 256
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
# 2 warmup steps
for _ in tqdm(range(2), desc="Warm-up step"):
with sdpa_kernel(SDPBackend.MATH):
result = pipe(sample.copy(), generate_kwargs={"min_new_tokens": 256, "max_new_tokens": 256})
# fast run
with sdpa_kernel(SDPBackend.MATH):
result = pipe(sample.copy())
print(result["text"])
```
#### Flash Attention 2
We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2) if your GPU supports it and you are not using [torch.compile](#torch-compile).
To do so, first install [Flash Attention](https://github.com/Dao-AILab/flash-attention):
```
pip install flash-attn --no-build-isolation
```
Then pass `attn_implementation="flash_attention_2"` to `from_pretrained`:
```python
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, attn_implementation="flash_attention_2")
```
#### Torch Scale-Product-Attention (SDPA)
If your GPU does not support Flash Attention, we recommend making use of PyTorch [scaled dot-product attention (SDPA)](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html).
This attention implementation is activated **by default** for PyTorch versions 2.1.1 or greater. To check
whether you have a compatible PyTorch version, run the following Python code snippet:
```python
from transformers.utils import is_torch_sdpa_available
print(is_torch_sdpa_available())
```
If the above returns `True`, you have a valid version of PyTorch installed and SDPA is activated by default. If it
returns `False`, you need to upgrade your PyTorch version according to the [official instructions](https://pytorch.org/get-started/locally/).
Once a valid PyTorch version is installed, SDPA is activated by default. It can also be set explicitly by specifying
`attn_implementation="sdpa"` as follows:
```python
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, attn_implementation="sdpa")
```
For more information about how to use SDPA, refer to the [Transformers SDPA documentation](https://huggingface.co/docs/transformers/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention).
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model. There are two
flavours of Whisper model: English-only and multilingual. The English-only models were trained on the task of English
speech recognition. The multilingual models were trained simultaneously on multilingual speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio. For speech
translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes. The smallest four are available as English-only
and multilingual. The largest checkpoints are multilingual only. All of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
| large-v3 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v3) |
| large-v3-turbo | 809 M | x | [✓](https://huggingface.co/openai/whisper-large-v3-turbo) |
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
No information provided.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
mradermacher/alfred-40b-1023-i1-GGUF | mradermacher | 2024-11-06T15:41:09Z | 106 | 0 | transformers | [
"transformers",
"gguf",
"falcon-40b",
"long-context",
"falcon",
"NTK-YaRN",
"en",
"fr",
"de",
"es",
"it",
"dataset:OpenAssistant/oasst1",
"dataset:ehartford/dolphin",
"dataset:tau/sled",
"dataset:tiiuae/falcon-refinedweb",
"base_model:lightonai/alfred-40b-1023",
"base_model:quantized:lightonai/alfred-40b-1023",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-06T07:03:20Z | ---
base_model: lightonai/alfred-40b-1023
datasets:
- OpenAssistant/oasst1
- ehartford/dolphin
- tau/sled
- tiiuae/falcon-refinedweb
language:
- en
- fr
- de
- es
- it
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- falcon-40b
- long-context
- falcon
- NTK-YaRN
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/lightonai/alfred-40b-1023
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/alfred-40b-1023-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
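As a quick, non-authoritative sketch (the file name refers to one of the quants listed below), a single-file quant can be run directly with llama.cpp, and split files can simply be concatenated first:
```bash
# Illustrative only: run one of the imatrix quants below with llama.cpp's CLI.
./llama-cli -m alfred-40b-1023.i1-Q4_K_M.gguf -p "Hello, Alfred." -n 128
# If a quant is split into several parts, concatenate them before use:
# cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```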
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ1_S.gguf) | i1-IQ1_S | 9.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ1_M.gguf) | i1-IQ1_M | 10.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ2_XS.gguf) | i1-IQ2_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ2_S.gguf) | i1-IQ2_S | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ2_M.gguf) | i1-IQ2_M | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q2_K.gguf) | i1-Q2_K | 15.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 16.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ3_XS.gguf) | i1-IQ3_XS | 17.9 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ3_S.gguf) | i1-IQ3_S | 18.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q3_K_S.gguf) | i1-Q3_K_S | 18.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ3_M.gguf) | i1-IQ3_M | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q3_K_M.gguf) | i1-Q3_K_M | 20.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q3_K_L.gguf) | i1-Q3_K_L | 21.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-IQ4_XS.gguf) | i1-IQ4_XS | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q4_K_S.gguf) | i1-Q4_K_S | 23.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q4_0.gguf) | i1-Q4_0 | 24.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q4_K_M.gguf) | i1-Q4_K_M | 25.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q5_K_S.gguf) | i1-Q5_K_S | 29.1 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q5_K_M.gguf) | i1-Q5_K_M | 30.7 | |
| [GGUF](https://huggingface.co/mradermacher/alfred-40b-1023-i1-GGUF/resolve/main/alfred-40b-1023.i1-Q6_K.gguf) | i1-Q6_K | 34.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/pie-all-uncon-13b-GGUF | mradermacher | 2024-11-06T15:40:11Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LearningOpt/pie-all-uncon-13b",
"base_model:quantized:LearningOpt/pie-all-uncon-13b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-06T14:13:18Z | ---
base_model: LearningOpt/pie-all-uncon-13b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LearningOpt/pie-all-uncon-13b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
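As an illustrative sketch only (the file name matches one of the quants listed below; parameters are assumptions), the files can also be loaded from Python with llama-cpp-python:
```python
from llama_cpp import Llama
# Illustrative only: load a quant listed below and generate a short completion.
llm = Llama(model_path="pie-all-uncon-13b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write one sentence about Python performance optimization.", max_tokens=64)
print(out["choices"][0]["text"])
```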
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pie-all-uncon-13b-GGUF/resolve/main/pie-all-uncon-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Xu-Ouyang/pythia-6.9b-deduped-int8-step4-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-06T15:39:04Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-06T15:37:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
exala/db_aca2_4.5 | exala | 2024-11-06T15:38:35Z | 103 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T15:38:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/blossom-v4-yi-34b-i1-GGUF | mradermacher | 2024-11-06T15:37:08Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"zh",
"en",
"dataset:Azure99/blossom-chat-v2",
"dataset:Azure99/blossom-math-v3",
"dataset:Azure99/blossom-wizard-v2",
"dataset:Azure99/blossom-orca-v2",
"base_model:Azure99/blossom-v4-yi-34b",
"base_model:quantized:Azure99/blossom-v4-yi-34b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-06T10:01:25Z | ---
base_model: Azure99/blossom-v4-yi-34b
datasets:
- Azure99/blossom-chat-v2
- Azure99/blossom-math-v3
- Azure99/blossom-wizard-v2
- Azure99/blossom-orca-v2
language:
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Azure99/blossom-v4-yi-34b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/blossom-v4-yi-34b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
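A minimal download sketch using `huggingface_hub` (assumed installed); the chosen filename is the i1-Q4_K_M entry from the table below.

```python
# pip install huggingface_hub  (assumed)
from huggingface_hub import hf_hub_download

# Fetch the i1-Q4_K_M quant listed in the table below; the resulting file can
# then be passed to any llama.cpp-compatible runtime.
path = hf_hub_download(
    repo_id="mradermacher/blossom-v4-yi-34b-i1-GGUF",
    filename="blossom-v4-yi-34b.i1-Q4_K_M.gguf",
)
print(path)
```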
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v4-yi-34b-i1-GGUF/resolve/main/blossom-v4-yi-34b.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF | mradermacher | 2024-11-06T15:35:12Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"tr",
"en",
"base_model:Trendyol/Trendyol-LLM-7b-chat-v0.1",
"base_model:quantized:Trendyol/Trendyol-LLM-7b-chat-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-06T06:04:52Z | ---
base_model: Trendyol/Trendyol-LLM-7b-chat-v0.1
language:
- tr
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 5.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
QuantFactory/KONI-Llama3.1-8B-Instruct-20241024-GGUF | QuantFactory | 2024-11-06T15:34:48Z | 105 | 3 | transformers | [
"transformers",
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T14:40:16Z |
---
library_name: transformers
tags: []
---
[](https://hf.co/QuantFactory)
# QuantFactory/KONI-Llama3.1-8B-Instruct-20241024-GGUF
This is a quantized version of [KISTI-KONI/KONI-Llama3.1-8B-Instruct-20241024](https://huggingface.co/KISTI-KONI/KONI-Llama3.1-8B-Instruct-20241024), created using llama.cpp.
# Original Model Card
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
techiaith/whisper-large-v3-ft-cv-cy | techiaith | 2024-11-06T15:32:39Z | 10 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"automatic-speech-recognition",
"cy",
"dataset:techiaith/commonvoice_18_0_cy",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"region:us"
] | automatic-speech-recognition | 2024-08-26T11:24:16Z | ---
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
- whisper
datasets:
- techiaith/commonvoice_18_0_cy
metrics:
- wer
model-index:
- name: whisper-large-v3-ft-cv-cy-train-all-plus-other-with-excluded
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: DewiBrynJones/commonvoice_18_0_cy default
type: DewiBrynJones/commonvoice_18_0_cy
args: default
metrics:
- name: Wer
type: wer
value: 0.185
language:
- cy
pipeline_tag: automatic-speech-recognition
---
# whisper-large-v3-ft-cv-cy
This model is a version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) fine-tuned with the
`train_all` and `other_with_excluded` custom splits from [techiaith/commonvoice_18_0_cy](https://huggingface.co/datasets/techiaith/commonvoice_18_0_cy).
It achieves the following results on the standard test set of Common Voice release 18 for Welsh:
- WER: 18.50
- CER: 5.32
N.B. this model performs considerably worse on English-language speech, but better on Welsh than a [bilingual model](https://huggingface.co/techiaith/whisper-large-v3-ft-cv-cy-en).
## Usage
```python
from transformers import pipeline
transcriber = pipeline("automatic-speech-recognition", model="techiaith/whisper-large-v3-ft-cv-cy")
result = transcriber(<path or url to soundfile>)
print (result)
```
`{'text': 'Mae hen wlad fy nhadau yn annwyl i mi.'}` |
Tippawan/pr-corrected-v8 | Tippawan | 2024-11-06T15:26:40Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-06T15:26:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mav23/SmolLM2-1.7B-GGUF | mav23 | 2024-11-06T15:25:15Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-06T15:09:39Z | ---
library_name: transformers
license: apache-2.0
language:
- en
---
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Evaluation](#evaluation)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
### How to use
```bash
pip install transformers
```
#### Running the model on CPU/GPU/multi GPU
* _Using full precision_
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
# for fp16 use `torch_dtype=torch.float16` instead
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 3422.76 MB
```
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base Pre-Trained Model
| Metric | SmolLM2-1.7B | Llama-1B | Qwen2.5-1.5B | SmolLM1-1.7B |
|------------------|--------------|-------------|---------------|--------------|
| HellaSwag | **68.7** | 61.2 | 66.4 | 62.9 |
| ARC (Average) | **60.5** | 49.2 | 58.5 | 59.9 |
| PIQA | **77.6** | 74.8 | 76.1 | 76.0 |
| MMLU-Pro (MCF) | **19.4** | 11.7 | 13.7 | 10.8 |
| CommonsenseQA | **43.6** | 41.2 | 34.1 | 38.0 |
| TriviaQA | **36.7** | 28.1 | 20.9 | 22.5 |
| Winogrande | **59.4** | 57.8 | 59.3 | 54.7 |
| OpenBookQA | 42.2 | 38.4 | 40.0 | **42.4** |
| GSM8K (5-shot) | 31.0 | 7.2 | **61.3** | 5.5 |
## Instruction Model
| Metric | SmolLM2-1.7B-Instruct | Llama-1B-Instruct | Qwen2.5-1.5B-Instruct | SmolLM1-1.7B-Instruct |
|:-----------------------------|:---------------------:|:-----------------:|:----------------------:|:----------------------:|
| IFEval (Average prompt/inst) | **56.7** | 53.5 | 47.4 | 23.1 |
| MT-Bench | 6.13 | 5.48 | **6.52** | 4.33 |
| OpenRewrite-Eval (micro_avg RougeL) | 44.9 | 39.2 | **46.9** | NaN |
| HellaSwag | **66.1** | 56.1 | 60.9 | 55.5 |
| ARC (Average) | **51.7** | 41.6 | 46.2 | 43.7 |
| PIQA | **74.4** | 72.3 | 73.2 | 71.6 |
| MMLU-Pro (MCF) | 19.3 | 12.7 | **24.2** | 11.7 |
| BBH (3-shot) | 32.2 | 27.6 | **35.3** | 25.7 |
| GSM8K (5-shot) | **48.2** | 26.8 | 42.8 | 4.62 |
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 11T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 256 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bash
@misc{allal2024SmolLM2,
title={SmolLM2 - with great data, comes great performance},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
year={2024},
}
``` |
Buyforhonor/jonyb | Buyforhonor | 2024-11-06T15:24:10Z | 5 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-06T14:37:59Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jonyb
---
# Jonyb
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jonyb` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Buyforhonor/jonyb', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ihanif/whisper-small-tunning-v1 | ihanif | 2024-11-06T15:22:05Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ps",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-05T13:13:17Z | ---
library_name: transformers
language:
- ps
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Small - Hanif Rahman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: ps
split: test
args: 'config: ps, split: test'
metrics:
- name: Wer
type: wer
value: 47.980613893376415
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small - Hanif Rahman
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8094
- Wer Ortho: 51.6855
- Wer: 47.9806
## Model description
More information needed
## Intended uses & limitations
More information needed
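A minimal transcription sketch, assuming a local Pashto audio file; the path below is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from this repository.
asr = pipeline("automatic-speech-recognition", model="ihanif/whisper-small-tunning-v1")

# "clip.wav" is a placeholder for any local Pashto recording.
result = asr("clip.wav")
print(result["text"])
```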
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.6754 | 0.9346 | 100 | 0.6689 | 62.1021 | 58.4888 |
| 0.4477 | 1.8692 | 200 | 0.6215 | 57.3134 | 53.5101 |
| 0.2243 | 2.8037 | 300 | 0.6222 | 55.8883 | 52.0928 |
| 0.0949 | 3.7383 | 400 | 0.6822 | 54.6007 | 49.6989 |
| 0.0448 | 4.6729 | 500 | 0.7240 | 53.5301 | 49.4346 |
| 0.0201 | 5.6075 | 600 | 0.7355 | 52.7344 | 48.9646 |
| 0.0124 | 6.5421 | 700 | 0.7615 | 52.3944 | 48.6929 |
| 0.0035 | 7.4766 | 800 | 0.7868 | 51.0778 | 47.2243 |
| 0.002 | 8.4112 | 900 | 0.8025 | 51.6276 | 47.6869 |
| 0.0011 | 9.3458 | 1000 | 0.8094 | 51.6855 | 47.9806 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
milka1g/esm2_t33_650M_UR50D-finetuned | milka1g | 2024-11-06T15:21:16Z | 103 | 0 | transformers | [
"transformers",
"safetensors",
"esm",
"text-classification",
"generated_from_trainer",
"base_model:facebook/esm2_t33_650M_UR50D",
"base_model:finetune:facebook/esm2_t33_650M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T15:02:09Z | ---
library_name: transformers
license: mit
base_model: facebook/esm2_t33_650M_UR50D
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: esm2_t33_650M_UR50D-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t33_650M_UR50D-finetuned
This model is a fine-tuned version of [facebook/esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D) for predicting the toxicity of protein sequences, i.e. whether a given protein is toxic (1) or non-toxic (0).
It achieves the following results on the evaluation set:
- Loss: 0.4409
- Tp: 539
- Tn: 617
- Fp: 47
- Fn: 93
- Accuracy: 0.8920
- Precision: 0.9198
- Recall: 0.8528
- F1-score: 0.8851
- Auc: 0.8910
- Mcc: 0.7854
## Model description
More information needed
## Intended uses & limitations
More information needed
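A minimal inference sketch; the amino-acid sequence below is an arbitrary placeholder, and the label-to-class mapping (toxic = 1, non-toxic = 0) follows the description above.

```python
from transformers import pipeline

# Load the fine-tuned classifier from this repository.
clf = pipeline("text-classification", model="milka1g/esm2_t33_650M_UR50D-finetuned")

# Arbitrary example amino-acid sequence (placeholder, not from the training data).
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALP"
print(clf(sequence))
```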
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Tp | Tn | Fp | Fn | Accuracy | Precision | Recall | F1-score | Auc | Mcc |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:--:|:---:|:--------:|:---------:|:------:|:--------:|:------:|:------:|
| 0.393 | 1.0 | 1296 | 0.3616 | 507 | 615 | 49 | 125 | 0.8657 | 0.9119 | 0.8022 | 0.8535 | 0.8642 | 0.7356 |
| 0.3052 | 2.0 | 2592 | 0.3159 | 536 | 608 | 56 | 96 | 0.8827 | 0.9054 | 0.8481 | 0.8758 | 0.8819 | 0.7664 |
| 0.166 | 3.0 | 3888 | 0.4409 | 539 | 617 | 47 | 93 | 0.8920 | 0.9198 | 0.8528 | 0.8851 | 0.8910 | 0.7854 |
### Framework versions
- Transformers 4.45.2
- Pytorch 1.13.1+cu117
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Youlln/ECE-PRYMMAL-YL-7B-SLERP-V4 | Youlln | 2024-11-06T15:16:59Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T14:59:27Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/ASAP_FineTuningBERT_Aug_k20_task1_organization_fold2 | MayBashendy | 2024-11-06T15:09:50Z | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T14:34:22Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k20_task1_organization_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k20_task1_organization_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5477
- Qwk: 0.6224
- Mse: 0.5477
- Rmse: 0.7400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
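For reference, the hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below; the output directory is a placeholder and the data pipeline is omitted.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above; output_dir is a
# placeholder and no dataset handling is shown.
training_args = TrainingArguments(
    output_dir="asap-bert-fold2",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```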
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 0.0061 | 2 | 10.3979 | 0.0 | 10.3979 | 3.2246 |
| No log | 0.0123 | 4 | 8.7846 | 0.0017 | 8.7846 | 2.9639 |
| No log | 0.0184 | 6 | 7.0670 | 0.0023 | 7.0670 | 2.6584 |
| No log | 0.0245 | 8 | 5.6048 | 0.0 | 5.6048 | 2.3674 |
| No log | 0.0307 | 10 | 4.5908 | 0.0 | 4.5908 | 2.1426 |
| No log | 0.0368 | 12 | 3.6199 | 0.0452 | 3.6199 | 1.9026 |
| No log | 0.0429 | 14 | 2.8005 | 0.0078 | 2.8005 | 1.6735 |
| No log | 0.0491 | 16 | 2.2260 | 0.0039 | 2.2260 | 1.4920 |
| No log | 0.0552 | 18 | 1.6601 | 0.0039 | 1.6601 | 1.2884 |
| No log | 0.0613 | 20 | 1.2707 | 0.1300 | 1.2707 | 1.1273 |
| No log | 0.0675 | 22 | 1.0051 | 0.0345 | 1.0051 | 1.0026 |
| No log | 0.0736 | 24 | 0.8565 | 0.0107 | 0.8565 | 0.9254 |
| No log | 0.0798 | 26 | 0.7876 | 0.0107 | 0.7876 | 0.8874 |
| No log | 0.0859 | 28 | 0.7741 | 0.0107 | 0.7741 | 0.8798 |
| No log | 0.0920 | 30 | 0.7607 | 0.0107 | 0.7607 | 0.8722 |
| No log | 0.0982 | 32 | 0.8793 | 0.0107 | 0.8793 | 0.9377 |
| No log | 0.1043 | 34 | 0.7591 | 0.0275 | 0.7591 | 0.8712 |
| No log | 0.1104 | 36 | 0.8418 | 0.3565 | 0.8418 | 0.9175 |
| No log | 0.1166 | 38 | 0.7564 | 0.0246 | 0.7564 | 0.8697 |
| No log | 0.1227 | 40 | 0.7600 | 0.0107 | 0.7600 | 0.8718 |
| No log | 0.1288 | 42 | 0.8162 | 0.0327 | 0.8162 | 0.9034 |
| No log | 0.1350 | 44 | 0.9592 | 0.0 | 0.9592 | 0.9794 |
| No log | 0.1411 | 46 | 1.0754 | 0.0 | 1.0754 | 1.0370 |
| No log | 0.1472 | 48 | 0.9586 | 0.0 | 0.9586 | 0.9791 |
| No log | 0.1534 | 50 | 0.8507 | 0.0 | 0.8507 | 0.9223 |
| No log | 0.1595 | 52 | 0.8100 | 0.0078 | 0.8100 | 0.9000 |
| No log | 0.1656 | 54 | 0.7948 | 0.0107 | 0.7948 | 0.8915 |
| No log | 0.1718 | 56 | 0.7620 | 0.0556 | 0.7620 | 0.8729 |
| No log | 0.1779 | 58 | 0.7691 | 0.1869 | 0.7691 | 0.8770 |
| No log | 0.1840 | 60 | 0.7401 | 0.0156 | 0.7401 | 0.8603 |
| No log | 0.1902 | 62 | 0.8077 | 0.3592 | 0.8077 | 0.8987 |
| No log | 0.1963 | 64 | 0.8076 | 0.4068 | 0.8076 | 0.8987 |
| No log | 0.2025 | 66 | 0.7194 | 0.0107 | 0.7194 | 0.8482 |
| No log | 0.2086 | 68 | 0.7352 | 0.0449 | 0.7352 | 0.8575 |
| No log | 0.2147 | 70 | 0.6958 | 0.0280 | 0.6958 | 0.8341 |
| No log | 0.2209 | 72 | 0.7091 | 0.1405 | 0.7091 | 0.8421 |
| No log | 0.2270 | 74 | 0.7145 | 0.0764 | 0.7145 | 0.8453 |
| No log | 0.2331 | 76 | 0.7052 | 0.0343 | 0.7052 | 0.8397 |
| No log | 0.2393 | 78 | 0.6916 | 0.0117 | 0.6916 | 0.8316 |
| No log | 0.2454 | 80 | 0.6545 | 0.1105 | 0.6545 | 0.8090 |
| No log | 0.2515 | 82 | 0.6297 | 0.3488 | 0.6297 | 0.7935 |
| No log | 0.2577 | 84 | 0.5875 | 0.2975 | 0.5875 | 0.7665 |
| No log | 0.2638 | 86 | 0.5733 | 0.3862 | 0.5733 | 0.7571 |
| No log | 0.2699 | 88 | 0.5875 | 0.4321 | 0.5875 | 0.7665 |
| No log | 0.2761 | 90 | 0.5558 | 0.4178 | 0.5558 | 0.7455 |
| No log | 0.2822 | 92 | 0.5483 | 0.3694 | 0.5483 | 0.7405 |
| No log | 0.2883 | 94 | 0.5802 | 0.4609 | 0.5802 | 0.7617 |
| No log | 0.2945 | 96 | 0.5814 | 0.4641 | 0.5814 | 0.7625 |
| No log | 0.3006 | 98 | 0.5944 | 0.4698 | 0.5944 | 0.7710 |
| No log | 0.3067 | 100 | 0.5912 | 0.4270 | 0.5912 | 0.7689 |
| No log | 0.3129 | 102 | 0.5951 | 0.4307 | 0.5951 | 0.7715 |
| No log | 0.3190 | 104 | 0.7027 | 0.4338 | 0.7027 | 0.8382 |
| No log | 0.3252 | 106 | 0.6867 | 0.4078 | 0.6867 | 0.8287 |
| No log | 0.3313 | 108 | 0.6111 | 0.3126 | 0.6111 | 0.7817 |
| No log | 0.3374 | 110 | 0.6397 | 0.3805 | 0.6397 | 0.7998 |
| No log | 0.3436 | 112 | 0.6431 | 0.3192 | 0.6431 | 0.8019 |
| No log | 0.3497 | 114 | 0.6764 | 0.4154 | 0.6764 | 0.8224 |
| No log | 0.3558 | 116 | 0.7467 | 0.4027 | 0.7467 | 0.8641 |
| No log | 0.3620 | 118 | 0.6178 | 0.4502 | 0.6178 | 0.7860 |
| No log | 0.3681 | 120 | 0.5557 | 0.3214 | 0.5557 | 0.7455 |
| No log | 0.3742 | 122 | 0.5434 | 0.3790 | 0.5434 | 0.7371 |
| No log | 0.3804 | 124 | 0.6355 | 0.4301 | 0.6355 | 0.7972 |
| No log | 0.3865 | 126 | 0.8129 | 0.3902 | 0.8129 | 0.9016 |
| No log | 0.3926 | 128 | 0.7482 | 0.3927 | 0.7482 | 0.8650 |
| No log | 0.3988 | 130 | 0.5857 | 0.3195 | 0.5857 | 0.7653 |
| No log | 0.4049 | 132 | 0.6166 | 0.1864 | 0.6166 | 0.7852 |
| No log | 0.4110 | 134 | 0.5925 | 0.2897 | 0.5925 | 0.7697 |
| No log | 0.4172 | 136 | 0.6668 | 0.4111 | 0.6668 | 0.8166 |
| No log | 0.4233 | 138 | 0.6246 | 0.4229 | 0.6246 | 0.7903 |
| No log | 0.4294 | 140 | 0.5774 | 0.2873 | 0.5774 | 0.7598 |
| No log | 0.4356 | 142 | 0.6020 | 0.1867 | 0.6020 | 0.7759 |
| No log | 0.4417 | 144 | 0.5802 | 0.2715 | 0.5802 | 0.7617 |
| No log | 0.4479 | 146 | 0.6589 | 0.4145 | 0.6589 | 0.8117 |
| No log | 0.4540 | 148 | 0.7342 | 0.4142 | 0.7342 | 0.8568 |
| No log | 0.4601 | 150 | 0.6586 | 0.4049 | 0.6586 | 0.8115 |
| No log | 0.4663 | 152 | 0.5947 | 0.2276 | 0.5947 | 0.7712 |
| No log | 0.4724 | 154 | 0.7040 | 0.1425 | 0.7040 | 0.8390 |
| No log | 0.4785 | 156 | 0.7049 | 0.1608 | 0.7049 | 0.8396 |
| No log | 0.4847 | 158 | 0.6022 | 0.1997 | 0.6022 | 0.7760 |
| No log | 0.4908 | 160 | 0.5931 | 0.4110 | 0.5931 | 0.7702 |
| No log | 0.4969 | 162 | 0.6248 | 0.4250 | 0.6248 | 0.7904 |
| No log | 0.5031 | 164 | 0.6884 | 0.4446 | 0.6884 | 0.8297 |
| No log | 0.5092 | 166 | 0.6566 | 0.3873 | 0.6566 | 0.8103 |
| No log | 0.5153 | 168 | 0.5594 | 0.4248 | 0.5594 | 0.7479 |
| No log | 0.5215 | 170 | 0.5354 | 0.3982 | 0.5354 | 0.7317 |
| No log | 0.5276 | 172 | 0.5575 | 0.4558 | 0.5575 | 0.7466 |
| No log | 0.5337 | 174 | 0.5632 | 0.4632 | 0.5632 | 0.7505 |
| No log | 0.5399 | 176 | 0.5302 | 0.3643 | 0.5302 | 0.7282 |
| No log | 0.5460 | 178 | 0.5466 | 0.3062 | 0.5466 | 0.7393 |
| No log | 0.5521 | 180 | 0.5303 | 0.3566 | 0.5303 | 0.7282 |
| No log | 0.5583 | 182 | 0.5491 | 0.4518 | 0.5491 | 0.7410 |
| No log | 0.5644 | 184 | 0.5397 | 0.4440 | 0.5397 | 0.7346 |
| No log | 0.5706 | 186 | 0.5431 | 0.4344 | 0.5431 | 0.7370 |
| No log | 0.5767 | 188 | 0.5454 | 0.4424 | 0.5454 | 0.7385 |
| No log | 0.5828 | 190 | 0.5929 | 0.4558 | 0.5929 | 0.7700 |
| No log | 0.5890 | 192 | 0.5805 | 0.4692 | 0.5805 | 0.7619 |
| No log | 0.5951 | 194 | 0.5165 | 0.4603 | 0.5165 | 0.7187 |
| No log | 0.6012 | 196 | 0.4927 | 0.4603 | 0.4927 | 0.7019 |
| No log | 0.6074 | 198 | 0.4988 | 0.4926 | 0.4988 | 0.7062 |
| No log | 0.6135 | 200 | 0.6043 | 0.5238 | 0.6043 | 0.7774 |
| No log | 0.6196 | 202 | 0.6205 | 0.5402 | 0.6205 | 0.7877 |
| No log | 0.6258 | 204 | 0.4832 | 0.4964 | 0.4832 | 0.6951 |
| No log | 0.6319 | 206 | 0.4540 | 0.5067 | 0.4540 | 0.6738 |
| No log | 0.6380 | 208 | 0.4552 | 0.5177 | 0.4552 | 0.6747 |
| No log | 0.6442 | 210 | 0.4557 | 0.5113 | 0.4557 | 0.6750 |
| No log | 0.6503 | 212 | 0.5182 | 0.5353 | 0.5182 | 0.7199 |
| No log | 0.6564 | 214 | 0.5272 | 0.5370 | 0.5272 | 0.7261 |
| No log | 0.6626 | 216 | 0.4869 | 0.5099 | 0.4869 | 0.6978 |
| No log | 0.6687 | 218 | 0.6139 | 0.5475 | 0.6139 | 0.7835 |
| No log | 0.6748 | 220 | 0.6738 | 0.5521 | 0.6738 | 0.8209 |
| No log | 0.6810 | 222 | 0.6334 | 0.5485 | 0.6334 | 0.7959 |
| No log | 0.6871 | 224 | 0.5798 | 0.5539 | 0.5798 | 0.7615 |
| No log | 0.6933 | 226 | 0.5371 | 0.5552 | 0.5371 | 0.7329 |
| No log | 0.6994 | 228 | 0.4993 | 0.5473 | 0.4993 | 0.7066 |
| No log | 0.7055 | 230 | 0.6712 | 0.5405 | 0.6712 | 0.8193 |
| No log | 0.7117 | 232 | 0.6595 | 0.5421 | 0.6595 | 0.8121 |
| No log | 0.7178 | 234 | 0.4617 | 0.5310 | 0.4617 | 0.6795 |
| No log | 0.7239 | 236 | 0.4914 | 0.4552 | 0.4914 | 0.7010 |
| No log | 0.7301 | 238 | 0.4736 | 0.4653 | 0.4736 | 0.6882 |
| No log | 0.7362 | 240 | 0.4680 | 0.5173 | 0.4680 | 0.6841 |
| No log | 0.7423 | 242 | 0.6012 | 0.5059 | 0.6012 | 0.7754 |
| No log | 0.7485 | 244 | 0.5771 | 0.5308 | 0.5771 | 0.7596 |
| No log | 0.7546 | 246 | 0.4608 | 0.5076 | 0.4608 | 0.6789 |
| No log | 0.7607 | 248 | 0.4826 | 0.4466 | 0.4826 | 0.6947 |
| No log | 0.7669 | 250 | 0.5302 | 0.4105 | 0.5302 | 0.7281 |
| No log | 0.7730 | 252 | 0.4906 | 0.4441 | 0.4906 | 0.7004 |
| No log | 0.7791 | 254 | 0.4667 | 0.5060 | 0.4667 | 0.6832 |
| No log | 0.7853 | 256 | 0.4662 | 0.5096 | 0.4662 | 0.6828 |
| No log | 0.7914 | 258 | 0.4598 | 0.5093 | 0.4598 | 0.6781 |
| No log | 0.7975 | 260 | 0.4636 | 0.5121 | 0.4636 | 0.6808 |
| No log | 0.8037 | 262 | 0.5031 | 0.5374 | 0.5031 | 0.7093 |
| No log | 0.8098 | 264 | 0.6510 | 0.5044 | 0.6510 | 0.8069 |
| No log | 0.8160 | 266 | 0.7434 | 0.4896 | 0.7434 | 0.8622 |
| No log | 0.8221 | 268 | 0.7149 | 0.5162 | 0.7149 | 0.8455 |
| No log | 0.8282 | 270 | 0.6602 | 0.5158 | 0.6602 | 0.8126 |
| No log | 0.8344 | 272 | 0.5151 | 0.5194 | 0.5151 | 0.7177 |
| No log | 0.8405 | 274 | 0.4677 | 0.5433 | 0.4677 | 0.6839 |
| No log | 0.8466 | 276 | 0.4877 | 0.5457 | 0.4877 | 0.6984 |
| No log | 0.8528 | 278 | 0.6147 | 0.5475 | 0.6147 | 0.7840 |
| No log | 0.8589 | 280 | 0.5566 | 0.5364 | 0.5566 | 0.7460 |
| No log | 0.8650 | 282 | 0.4337 | 0.5369 | 0.4337 | 0.6586 |
| No log | 0.8712 | 284 | 0.4282 | 0.4989 | 0.4282 | 0.6544 |
| No log | 0.8773 | 286 | 0.4241 | 0.5215 | 0.4241 | 0.6512 |
| No log | 0.8834 | 288 | 0.4278 | 0.5316 | 0.4278 | 0.6541 |
| No log | 0.8896 | 290 | 0.4208 | 0.5374 | 0.4208 | 0.6487 |
| No log | 0.8957 | 292 | 0.4123 | 0.5222 | 0.4123 | 0.6421 |
| No log | 0.9018 | 294 | 0.4486 | 0.5740 | 0.4486 | 0.6698 |
| No log | 0.9080 | 296 | 0.4498 | 0.5850 | 0.4498 | 0.6707 |
| No log | 0.9141 | 298 | 0.4043 | 0.5188 | 0.4043 | 0.6358 |
| No log | 0.9202 | 300 | 0.4122 | 0.5454 | 0.4122 | 0.6420 |
| No log | 0.9264 | 302 | 0.4565 | 0.5931 | 0.4565 | 0.6756 |
| No log | 0.9325 | 304 | 0.5121 | 0.5675 | 0.5121 | 0.7156 |
| No log | 0.9387 | 306 | 0.7061 | 0.5375 | 0.7061 | 0.8403 |
| No log | 0.9448 | 308 | 0.6642 | 0.5385 | 0.6642 | 0.8150 |
| No log | 0.9509 | 310 | 0.6004 | 0.5265 | 0.6004 | 0.7748 |
| No log | 0.9571 | 312 | 0.6738 | 0.5371 | 0.6738 | 0.8209 |
| No log | 0.9632 | 314 | 0.6313 | 0.5391 | 0.6313 | 0.7946 |
| No log | 0.9693 | 316 | 0.5623 | 0.5371 | 0.5623 | 0.7498 |
| No log | 0.9755 | 318 | 0.4838 | 0.5194 | 0.4838 | 0.6955 |
| No log | 0.9816 | 320 | 0.4584 | 0.4589 | 0.4584 | 0.6771 |
| No log | 0.9877 | 322 | 0.4560 | 0.4568 | 0.4560 | 0.6752 |
| No log | 0.9939 | 324 | 0.4703 | 0.5190 | 0.4703 | 0.6858 |
| No log | 1.0 | 326 | 0.4788 | 0.5582 | 0.4788 | 0.6919 |
| No log | 1.0061 | 328 | 0.4389 | 0.5394 | 0.4389 | 0.6625 |
| No log | 1.0123 | 330 | 0.4342 | 0.5565 | 0.4342 | 0.6589 |
| No log | 1.0184 | 332 | 0.4090 | 0.5306 | 0.4090 | 0.6395 |
| No log | 1.0245 | 334 | 0.4141 | 0.5722 | 0.4141 | 0.6435 |
| No log | 1.0307 | 336 | 0.4022 | 0.5461 | 0.4022 | 0.6342 |
| No log | 1.0368 | 338 | 0.4137 | 0.5738 | 0.4137 | 0.6432 |
| No log | 1.0429 | 340 | 0.4919 | 0.5997 | 0.4919 | 0.7013 |
| No log | 1.0491 | 342 | 0.4285 | 0.5867 | 0.4285 | 0.6546 |
| No log | 1.0552 | 344 | 0.4061 | 0.5463 | 0.4061 | 0.6372 |
| No log | 1.0613 | 346 | 0.4139 | 0.5946 | 0.4139 | 0.6434 |
| No log | 1.0675 | 348 | 0.4126 | 0.5903 | 0.4126 | 0.6423 |
| No log | 1.0736 | 350 | 0.4322 | 0.5872 | 0.4322 | 0.6574 |
| No log | 1.0798 | 352 | 0.4568 | 0.5973 | 0.4568 | 0.6759 |
| No log | 1.0859 | 354 | 0.5185 | 0.6089 | 0.5185 | 0.7200 |
| No log | 1.0920 | 356 | 0.5242 | 0.5950 | 0.5242 | 0.7240 |
| No log | 1.0982 | 358 | 0.6431 | 0.6062 | 0.6431 | 0.8020 |
| No log | 1.1043 | 360 | 0.6971 | 0.5829 | 0.6971 | 0.8349 |
| No log | 1.1104 | 362 | 0.6436 | 0.5850 | 0.6436 | 0.8022 |
| No log | 1.1166 | 364 | 0.5716 | 0.5751 | 0.5716 | 0.7561 |
| No log | 1.1227 | 366 | 0.6794 | 0.5789 | 0.6794 | 0.8243 |
| No log | 1.1288 | 368 | 0.6445 | 0.5728 | 0.6445 | 0.8028 |
| No log | 1.1350 | 370 | 0.4676 | 0.5295 | 0.4676 | 0.6838 |
| No log | 1.1411 | 372 | 0.4435 | 0.4720 | 0.4435 | 0.6659 |
| No log | 1.1472 | 374 | 0.4630 | 0.5376 | 0.4630 | 0.6804 |
| No log | 1.1534 | 376 | 0.5805 | 0.5967 | 0.5805 | 0.7619 |
| No log | 1.1595 | 378 | 0.5694 | 0.5949 | 0.5694 | 0.7546 |
| No log | 1.1656 | 380 | 0.4483 | 0.4928 | 0.4483 | 0.6696 |
| No log | 1.1718 | 382 | 0.4510 | 0.4237 | 0.4510 | 0.6716 |
| No log | 1.1779 | 384 | 0.4471 | 0.5120 | 0.4471 | 0.6686 |
| No log | 1.1840 | 386 | 0.4629 | 0.5658 | 0.4629 | 0.6804 |
| No log | 1.1902 | 388 | 0.4346 | 0.5575 | 0.4346 | 0.6592 |
| No log | 1.1963 | 390 | 0.4420 | 0.6185 | 0.4420 | 0.6649 |
| No log | 1.2025 | 392 | 0.4225 | 0.5757 | 0.4225 | 0.6500 |
| No log | 1.2086 | 394 | 0.4265 | 0.5723 | 0.4265 | 0.6531 |
| No log | 1.2147 | 396 | 0.4397 | 0.6207 | 0.4397 | 0.6631 |
| No log | 1.2209 | 398 | 0.5118 | 0.6550 | 0.5118 | 0.7154 |
| No log | 1.2270 | 400 | 0.4585 | 0.6424 | 0.4585 | 0.6771 |
| No log | 1.2331 | 402 | 0.4091 | 0.5702 | 0.4091 | 0.6396 |
| No log | 1.2393 | 404 | 0.4287 | 0.5988 | 0.4287 | 0.6548 |
| No log | 1.2454 | 406 | 0.6285 | 0.6505 | 0.6285 | 0.7928 |
| No log | 1.2515 | 408 | 0.6757 | 0.6677 | 0.6757 | 0.8220 |
| No log | 1.2577 | 410 | 0.4727 | 0.6340 | 0.4727 | 0.6875 |
| No log | 1.2638 | 412 | 0.4060 | 0.5803 | 0.4060 | 0.6372 |
| No log | 1.2699 | 414 | 0.4094 | 0.5365 | 0.4094 | 0.6399 |
| No log | 1.2761 | 416 | 0.4246 | 0.6140 | 0.4246 | 0.6516 |
| No log | 1.2822 | 418 | 0.5676 | 0.6155 | 0.5676 | 0.7534 |
| No log | 1.2883 | 420 | 0.5751 | 0.6099 | 0.5751 | 0.7583 |
| No log | 1.2945 | 422 | 0.5081 | 0.6192 | 0.5081 | 0.7128 |
| No log | 1.3006 | 424 | 0.5343 | 0.6185 | 0.5343 | 0.7310 |
| No log | 1.3067 | 426 | 0.4677 | 0.5958 | 0.4677 | 0.6839 |
| No log | 1.3129 | 428 | 0.4910 | 0.5990 | 0.4910 | 0.7007 |
| No log | 1.3190 | 430 | 0.5323 | 0.6255 | 0.5323 | 0.7296 |
| No log | 1.3252 | 432 | 0.4949 | 0.6374 | 0.4949 | 0.7035 |
| No log | 1.3313 | 434 | 0.4624 | 0.6227 | 0.4624 | 0.6800 |
| No log | 1.3374 | 436 | 0.4172 | 0.5823 | 0.4172 | 0.6459 |
| No log | 1.3436 | 438 | 0.4186 | 0.5786 | 0.4186 | 0.6470 |
| No log | 1.3497 | 440 | 0.5039 | 0.6432 | 0.5039 | 0.7098 |
| No log | 1.3558 | 442 | 0.8884 | 0.6580 | 0.8884 | 0.9425 |
| No log | 1.3620 | 444 | 0.9940 | 0.6472 | 0.9940 | 0.9970 |
| No log | 1.3681 | 446 | 0.6971 | 0.6822 | 0.6971 | 0.8349 |
| No log | 1.3742 | 448 | 0.4205 | 0.5902 | 0.4205 | 0.6485 |
| No log | 1.3804 | 450 | 0.4431 | 0.4995 | 0.4431 | 0.6656 |
| No log | 1.3865 | 452 | 0.4209 | 0.5535 | 0.4209 | 0.6487 |
| No log | 1.3926 | 454 | 0.5001 | 0.6088 | 0.5001 | 0.7072 |
| No log | 1.3988 | 456 | 0.6705 | 0.6463 | 0.6705 | 0.8188 |
| No log | 1.4049 | 458 | 0.6373 | 0.6012 | 0.6373 | 0.7983 |
| No log | 1.4110 | 460 | 0.5216 | 0.5925 | 0.5216 | 0.7222 |
| No log | 1.4172 | 462 | 0.4935 | 0.5747 | 0.4935 | 0.7025 |
| No log | 1.4233 | 464 | 0.4859 | 0.5950 | 0.4859 | 0.6971 |
| No log | 1.4294 | 466 | 0.5659 | 0.6203 | 0.5659 | 0.7522 |
| No log | 1.4356 | 468 | 0.6040 | 0.6563 | 0.6040 | 0.7772 |
| No log | 1.4417 | 470 | 0.5111 | 0.6375 | 0.5111 | 0.7149 |
| No log | 1.4479 | 472 | 0.4950 | 0.6371 | 0.4950 | 0.7036 |
| No log | 1.4540 | 474 | 0.4908 | 0.6300 | 0.4908 | 0.7006 |
| No log | 1.4601 | 476 | 0.5201 | 0.6393 | 0.5201 | 0.7212 |
| No log | 1.4663 | 478 | 0.5426 | 0.6439 | 0.5426 | 0.7366 |
| No log | 1.4724 | 480 | 0.5161 | 0.6164 | 0.5161 | 0.7184 |
| No log | 1.4785 | 482 | 0.4675 | 0.5829 | 0.4675 | 0.6838 |
| No log | 1.4847 | 484 | 0.4574 | 0.5240 | 0.4574 | 0.6763 |
| No log | 1.4908 | 486 | 0.4661 | 0.5330 | 0.4661 | 0.6827 |
| No log | 1.4969 | 488 | 0.5480 | 0.5765 | 0.5480 | 0.7403 |
| No log | 1.5031 | 490 | 0.6625 | 0.5809 | 0.6625 | 0.8139 |
| No log | 1.5092 | 492 | 0.5748 | 0.5736 | 0.5748 | 0.7582 |
| No log | 1.5153 | 494 | 0.5874 | 0.5853 | 0.5874 | 0.7664 |
| No log | 1.5215 | 496 | 0.6129 | 0.5977 | 0.6129 | 0.7829 |
| No log | 1.5276 | 498 | 0.7475 | 0.6388 | 0.7475 | 0.8646 |
| 0.4923 | 1.5337 | 500 | 0.7693 | 0.6393 | 0.7693 | 0.8771 |
| 0.4923 | 1.5399 | 502 | 0.5486 | 0.5936 | 0.5486 | 0.7407 |
| 0.4923 | 1.5460 | 504 | 0.4410 | 0.5276 | 0.4410 | 0.6641 |
| 0.4923 | 1.5521 | 506 | 0.4372 | 0.5348 | 0.4372 | 0.6612 |
| 0.4923 | 1.5583 | 508 | 0.5006 | 0.6006 | 0.5006 | 0.7076 |
| 0.4923 | 1.5644 | 510 | 0.7092 | 0.6592 | 0.7092 | 0.8422 |
| 0.4923 | 1.5706 | 512 | 0.6580 | 0.6658 | 0.6580 | 0.8112 |
| 0.4923 | 1.5767 | 514 | 0.5604 | 0.6525 | 0.5604 | 0.7486 |
| 0.4923 | 1.5828 | 516 | 0.5292 | 0.6223 | 0.5292 | 0.7275 |
| 0.4923 | 1.5890 | 518 | 0.5579 | 0.6331 | 0.5579 | 0.7470 |
| 0.4923 | 1.5951 | 520 | 0.6646 | 0.6609 | 0.6646 | 0.8152 |
| 0.4923 | 1.6012 | 522 | 0.7725 | 0.6635 | 0.7725 | 0.8789 |
| 0.4923 | 1.6074 | 524 | 0.6209 | 0.6651 | 0.6209 | 0.7880 |
| 0.4923 | 1.6135 | 526 | 0.4821 | 0.6252 | 0.4821 | 0.6943 |
| 0.4923 | 1.6196 | 528 | 0.5472 | 0.6461 | 0.5472 | 0.7397 |
| 0.4923 | 1.6258 | 530 | 0.6800 | 0.6716 | 0.6800 | 0.8246 |
| 0.4923 | 1.6319 | 532 | 0.8323 | 0.6842 | 0.8323 | 0.9123 |
| 0.4923 | 1.6380 | 534 | 0.6719 | 0.6626 | 0.6719 | 0.8197 |
| 0.4923 | 1.6442 | 536 | 0.5259 | 0.6312 | 0.5259 | 0.7252 |
| 0.4923 | 1.6503 | 538 | 0.4493 | 0.6048 | 0.4493 | 0.6703 |
| 0.4923 | 1.6564 | 540 | 0.4517 | 0.6152 | 0.4517 | 0.6721 |
| 0.4923 | 1.6626 | 542 | 0.4835 | 0.6485 | 0.4835 | 0.6953 |
| 0.4923 | 1.6687 | 544 | 0.4473 | 0.6079 | 0.4473 | 0.6688 |
| 0.4923 | 1.6748 | 546 | 0.4911 | 0.4801 | 0.4911 | 0.7008 |
| 0.4923 | 1.6810 | 548 | 0.5131 | 0.4626 | 0.5131 | 0.7163 |
| 0.4923 | 1.6871 | 550 | 0.4360 | 0.5372 | 0.4360 | 0.6603 |
| 0.4923 | 1.6933 | 552 | 0.5477 | 0.6224 | 0.5477 | 0.7400 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
adrianoL/distilbert-pt-cased-redacao-nota-modelo | adrianoL | 2024-11-06T15:02:16Z | 72 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:Geotrend/distilbert-base-pt-cased",
"base_model:finetune:Geotrend/distilbert-base-pt-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-06T15:01:19Z | ---
library_name: transformers
license: apache-2.0
base_model: Geotrend/distilbert-base-pt-cased
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-pt-cased-redacao-nota-modelo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-pt-cased-redacao-nota-modelo
This model is a fine-tuned version of [Geotrend/distilbert-base-pt-cased](https://huggingface.co/Geotrend/distilbert-base-pt-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged Keras sketch of the optimizer configuration follows the list):
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 456, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
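The serialized entry above describes Adam with a polynomial (power 1.0, i.e. linear) learning-rate decay from 2e-05 to 0 over 456 steps. Below is a hedged sketch of the equivalent Keras objects, with values copied from the config dump and everything else (model, data pipeline) assumed:
```python
# Hedged sketch -- reconstructs the optimizer/schedule from the serialized config above.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=456,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
# model.compile(optimizer=optimizer, ...)  # training_precision: float32
```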
### Training results
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.1.0
- Tokenizers 0.19.1
|
mav23/SmolLM2-1.7B-Instruct-GGUF | mav23 | 2024-11-06T14:59:23Z | 87 | 0 | transformers | [
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-06T14:45:20Z | ---
library_name: transformers
license: apache-2.0
language:
- en
---
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Evaluation](#evaluation)
3. [Examples](#examples)
4. [Limitations](#limitations)
5. [Training](#training)
6. [License](#license)
7. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
### How to use
### Transformers
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is the capital of France."}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-1.7B-Instruct --device cpu
```
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base Pre-Trained Model
| Metric | SmolLM2-1.7B | Llama-1B | Qwen2.5-1.5B | SmolLM1-1.7B |
|------------------|--------------|-------------|---------------|--------------|
| HellaSwag | **68.7** | 61.2 | 66.4 | 62.9 |
| ARC (Average) | **60.5** | 49.2 | 58.5 | 59.9 |
| PIQA | **77.6** | 74.8 | 76.1 | 76.0 |
| MMLU-Pro (MCF) | **19.4** | 11.7 | 13.7 | 10.8 |
| CommonsenseQA | **43.6** | 41.2 | 34.1 | 38.0 |
| TriviaQA | **36.7** | 28.1 | 20.9 | 22.5 |
| Winogrande | **59.4** | 57.8 | 59.3 | 54.7 |
| OpenBookQA | 42.2 | 38.4 | 40.0 | **42.4** |
| GSM8K (5-shot) | 31.0 | 7.2 | **61.3** | 5.5 |
## Instruction Model
| Metric | SmolLM2-1.7B-Instruct | Llama-1B-Instruct | Qwen2.5-1.5B-Instruct | SmolLM1-1.7B-Instruct |
|:-----------------------------|:---------------------:|:-----------------:|:----------------------:|:----------------------:|
| IFEval (Average prompt/inst) | **56.7** | 53.5 | 47.4 | 23.1 |
| MT-Bench | 6.13 | 5.48 | **6.52** | 4.33 |
| OpenRewrite-Eval (micro_avg RougeL) | 44.9 | 39.2 | **46.9** | NaN |
| HellaSwag | **66.1** | 56.1 | 60.9 | 55.5 |
| ARC (Average) | **51.7** | 41.6 | 46.2 | 43.7 |
| PIQA | **74.4** | 72.3 | 73.2 | 71.6 |
| MMLU-Pro (MCF) | 19.3 | 12.7 | **24.2** | 11.7 |
| BBH (3-shot) | 32.2 | 27.6 | **35.3** | 25.7 |
| GSM8K (5-shot) | **48.2** | 26.8 | 42.8 | 4.62 |
## Examples
Below are some system and instruct prompts that work well for specific tasks:
### Text rewriting
```python
system_prompt_rewrite = "You are an AI writing assistant. Your task is to rewrite the user's email to make it more professional and approachable while maintaining its main points and key message. Do not return any text other than the rewritten message."
user_prompt_rewrite = "Rewrite the message below to make it more friendly and approachable while maintaining its main points and key message. Do not add any new information or return any text other than the rewritten message\nThe message:"
messages = [{"role": "system", "content": system_prompt_rewrite}, {"role": "user", "content":f"{user_prompt_rewrite} The CI is failing after your last commit!"}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
```
Hey there! I noticed that the CI isn't passing after your latest commit. Could you take a look and let me know what's going on? Thanks so much for your help!
```
### Summarization
```python
system_prompt_summarize = "Provide a concise, objective summary of the input text in up to three sentences, focusing on key actions and intentions without using second or third person pronouns."
messages = [{"role": "system", "content": system_prompt_summarize}, {"role": "user", "content": INSERT_LONG_EMAIL}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Function calling
SmolLM2-1.7B-Instruct can handle function calling; it scores 27% on the [BFCL Leaderboard](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html). Here's how you can leverage it:
```python
import json
import re
from typing import Any, Optional
from jinja2 import Template
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.utils import get_json_schema
system_prompt = Template("""You are an expert in composing functions. You are given a question and a set of possible functions.
Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the functions can be used, point it out and refuse to answer.
If the given question lacks the parameters required by the function, also point it out.
You have access to the following tools:
<tools>{{ tools }}</tools>
The output MUST strictly adhere to the following format, and NO other text MUST be included.
The example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please make the tool calls an empty list '[]'.
<tool_call>[
{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},
... (more tool calls as required)
]</tool_call>""")
def prepare_messages(
query: str,
    tools: Optional[list[dict[str, Any]]] = None,
history: Optional[list[dict[str, str]]] = None
) -> list[dict[str, str]]:
"""Prepare the system and user messages for the given query and tools.
Args:
query: The query to be answered.
        tools: The tools available to the user. Defaults to None, in which case an
            empty list is passed to the model.
        history: Previous exchange of messages, including the system_prompt from
            the first query. Defaults to None for the first message in a conversation.
"""
if tools is None:
tools = []
if history:
messages = history.copy()
messages.append({"role": "user", "content": query})
else:
messages = [
{"role": "system", "content": system_prompt.render(tools=json.dumps(tools))},
{"role": "user", "content": query}
]
return messages
def parse_response(text: str) -> str | list[dict[str, Any]]:
    """Parses a response from the model, returning either the parsed
    list of tool calls, or the raw model output if no tool call
    could be extracted.
Args:
text: Response from the model.
"""
pattern = r"<tool_call>(.*?)</tool_call>"
matches = re.findall(pattern, text, re.DOTALL)
if matches:
return json.loads(matches[0])
return text
model_name_smollm = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name_smollm, device_map="auto", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_smollm)
from datetime import datetime
import random
def get_current_time() -> str:
"""Returns the current time in 24-hour format.
Returns:
str: Current time in HH:MM:SS format.
"""
return datetime.now().strftime("%H:%M:%S")
def get_random_number_between(min: int, max: int) -> int:
"""
Gets a random number between min and max.
Args:
min: The minimum number.
max: The maximum number.
Returns:
A random number between min and max.
"""
return random.randint(min, max)
tools = [get_json_schema(get_random_number_between), get_json_schema(get_current_time)]
toolbox = {"get_random_number_between": get_random_number_between, "get_current_time": get_current_time}
query = "Give me a number between 1 and 300"
messages = prepare_messages(query, tools=tools)
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
result = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
tool_calls = parse_response(result)
# [{'name': 'get_random_number_between', 'arguments': {'min': 1, 'max': 300}}]
# Get tool responses
tool_responses = [toolbox.get(tc["name"])(*tc["arguments"].values()) for tc in tool_calls]
# [63]
# For the second turn, rebuild the history of messages:
history = messages.copy()
# Add the "parsed response"
history.append({"role": "assistant", "content": result})
query = "Can you give me the hour?"
history.append({"role": "user", "content": query})
inputs = tokenizer.apply_chat_template(history, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
result = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
tool_calls = parse_response(result)
tool_responses = [toolbox.get(tc["name"])(*tc["arguments"].values()) for tc in tool_calls]
# ['07:57:25']
```
More details such as parallel function calls and tools not available can be found [here](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct/blob/main/instructions_function_calling.md)
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 11T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 256 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
- **Alignment Handbook:** [alignment-handbook](https://github.com/huggingface/alignment-handbook/)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bash
@misc{allal2024SmolLM2,
title={SmolLM2 - with great data, comes great performance},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
year={2024},
}
``` |
featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF | featherless-ai-quants | 2024-11-06T14:53:51Z | 10 | 0 | null | [
"gguf",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-06T12:59:39Z | ---
base_model: princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF/blob/main/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q8_0.gguf) | 8145.11 MB |
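The table lists the files but no loading snippet. One hedged way to fetch a quant and run it locally with `llama-cpp-python` (the chosen file, context size, and prompt are illustrative):
```python
# Hedged sketch -- not from the original card. Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="featherless-ai-quants/princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-GGUF",
    filename="princeton-nlp-Llama-3-Instruct-8B-SLiC-HF-v0.2-Q4_K_M.gguf",  # any row above works
)
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Briefly explain what preference optimization does for a chat model.", max_tokens=64)
print(out["choices"][0]["text"])
```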
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
ZyloO-AI/RawCharm-Amateur-Photography | ZyloO-AI | 2024-11-06T14:53:47Z | 40 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-11-06T14:49:25Z | ---
library_name: diffusers
pipeline_tag: text-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
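Since no snippet is provided, here is a hedged sketch based on the repository's `StableDiffusionXLPipeline`/text-to-image tags; the prompt and sampler settings are illustrative:
```python
# Hedged sketch -- not from the original card.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "ZyloO-AI/RawCharm-Amateur-Photography", torch_dtype=torch.float16
).to("cuda")
image = pipe("candid amateur photo of a rainy street at dusk", num_inference_steps=30).images[0]
image.save("rawcharm_sample.png")
```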
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
novalalthoff/wav2vec2-large-id-16hr-non-lp | novalalthoff | 2024-11-06T14:51:21Z | 80 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-06T14:49:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
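Since no snippet is provided, here is a hedged sketch based on the repository's `automatic-speech-recognition` tag; the audio file path is a placeholder, and Indonesian speech is assumed from the model name:
```python
# Hedged sketch -- not from the original card. "speech.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="novalalthoff/wav2vec2-large-id-16hr-non-lp",
)
print(asr("speech.wav")["text"])
```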
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FUTO-NIGERIA/airad | FUTO-NIGERIA | 2024-11-06T14:47:22Z | 9 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"image-classification",
"en",
"dataset:hf-vision/chest-xray-pneumonia",
"base_model:google/efficientnet-b0",
"base_model:finetune:google/efficientnet-b0",
"region:us"
] | image-classification | 2024-11-06T13:39:59Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
datasets:
- hf-vision/chest-xray-pneumonia
language:
- en
base_model:
- google/efficientnet-b0
pipeline_tag: image-classification
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
Xu-Ouyang/pythia-6.9b-deduped-int8-step2-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-06T14:46:28Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-06T14:36:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
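Since no snippet is provided, here is a hedged sketch that assumes the checkpoint ships its GPTQ quantization config (the repository tags include `gptq` and `8-bit`), so it loads through the standard `transformers` API with `optimum` and `auto-gptq` installed:
```python
# Hedged sketch -- not from the original card. GPTQ checkpoints need optimum + auto-gptq.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Xu-Ouyang/pythia-6.9b-deduped-int8-step2-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("The Pythia suite was trained on", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```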
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ZyloO-AI/Zyntoon-Semi-Realistic-Pony | ZyloO-AI | 2024-11-06T14:32:32Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-11-06T14:25:35Z | ---
library_name: diffusers
pipeline_tag: text-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
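Since no snippet is provided, here is a hedged sketch based on the repository's `StableDiffusionXLPipeline`/text-to-image tags; the prompt and settings are illustrative:
```python
# Hedged sketch -- not from the original card.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "ZyloO-AI/Zyntoon-Semi-Realistic-Pony", torch_dtype=torch.float16
).to("cuda")
image = pipe("semi-realistic portrait of an elven ranger, soft studio light", num_inference_steps=30).images[0]
image.save("zyntoon_sample.png")
```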
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ZyloO-AI/Volendir-Pony-Cinematic | ZyloO-AI | 2024-11-06T14:27:15Z | 38 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-11-06T13:14:59Z | ---
library_name: diffusers
pipeline_tag: text-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
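Since usage details are not yet documented, the sketch below shows how a checkpoint tagged as a `StableDiffusionXLPipeline` is typically loaded with 🧨 diffusers; the prompt and sampler settings are placeholders, not recommendations from the model author.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# A minimal sketch, assuming the repository loads as a standard SDXL pipeline
# (as its tags indicate). The prompt and settings below are illustrative only.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "ZyloO-AI/Volendir-Pony-Cinematic",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "cinematic portrait, dramatic lighting",  # placeholder prompt
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("output.png")
```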
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aigchacker/Text-Poster | aigchacker | 2024-11-06T14:26:55Z | 42 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"image-generation",
"flux",
"safetensors",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-06T13:59:31Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- image-generation
- flux
- safetensors
widget:
- text: Text poster, a couple
output:
url: images/6dd1a918d89991ad5e40513ab88e7d892077f89dac93edcf4b660dd2.jpg
- text: Text poster, a woman sitting in a cafe
output:
url: images/d2586464001008a80b5e45104e0f23290a35db048cab2e4fc4bfa356.jpg
- text: Text poster, eiffel tower
output:
url: images/f25e24ecfbd0aa96fb6f55ab29288ba4d1fffe79fd95679d9d2f1329.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: text poster
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# FLUX.1-dev-LoRA-Text-Poster
This is a LoRA (Text Poster) trained on FLUX.1-dev for artistic text posters by [cooooool](https://www.shakker.ai/userpage/c4d790d27e6b4de69f3f3508daf8f4c5/publish). If you are also interested in sharing your models on our platform, you are welcome to join our [Discord Community](https://discord.gg/5TuxSjJya6).
<div class="container">
<img src="./poster.jpeg" width="1024"/>
</div>
## Showcases
<Gallery />
## Trigger words
You should use `text poster` in your prompt to trigger the image generation. The recommended LoRA scale is `0.8` to `1.0` in diffusers.
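A minimal diffusers sketch is shown below; it assumes the LoRA weights in this repository load directly with `load_lora_weights`, and the prompt is taken from the showcase widgets above.

```python
import torch
from diffusers import FluxPipeline

# A sketch, not an official recipe: load FLUX.1-dev and attach this LoRA.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("aigchacker/Text-Poster")
pipe.fuse_lora(lora_scale=0.9)  # recommended range is 0.8-1.0
pipe.to("cuda")

# Keep the trigger phrase "text poster" in the prompt.
image = pipe(
    "Text poster, a couple",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("text_poster.png")
```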
## Online Inference
You can also download this model at [Shakker AI](https://www.shakker.ai/modelinfo/579ab130b53246fea49811bf80d38486/FLUX-text-poster?from=search), where we provide an online interface to generate images.
## Acknowledgements
This model was trained by our user [cooooool](https://www.shakker.ai/userpage/c4d790d27e6b4de69f3f3508daf8f4c5/publish), who retains the copyright; we release it with their permission. The model follows the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
mradermacher/tamil-llama-13b-base-v0.1-GGUF | mradermacher | 2024-11-06T14:18:54Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"ta",
"en",
"base_model:abhinand/tamil-llama-13b-base-v0.1",
"base_model:quantized:abhinand/tamil-llama-13b-base-v0.1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-06T12:45:03Z | ---
base_model: abhinand/tamil-llama-13b-base-v0.1
language:
- ta
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
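As a concrete (hedged) starting point, the sketch below pulls one of the single-file quants from this repository and runs it with the `llama-cpp-python` bindings; any llama.cpp-based runtime should work equally well, and the prompt is only a placeholder.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single-file quant from this repo (multi-part quants would need
# to be concatenated first, as described in the READMEs linked above).
gguf_path = hf_hub_download(
    repo_id="mradermacher/tamil-llama-13b-base-v0.1-GGUF",
    filename="tamil-llama-13b-base-v0.1.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)

# This is a base (non-instruct) model, so plain text completion is used here.
out = llm("தமிழ் மொழி", max_tokens=128)
print(out["choices"][0]["text"])
```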
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q3_K_S.gguf) | Q3_K_S | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q3_K_L.gguf) | Q3_K_L | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q5_K_S.gguf) | Q5_K_S | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q6_K.gguf) | Q6_K | 10.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/tamil-llama-13b-base-v0.1-GGUF/resolve/main/tamil-llama-13b-base-v0.1.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
GateNLP/covid-vaccine-twitter-bert | GateNLP | 2024-11-06T14:18:18Z | 117 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-01-13T19:02:52Z | # VaxxHesitancy: A Dataset for Studying Hesitancy Towards COVID-19 Vaccination on Twitter
Yida Mu, Mali Jin, Charlie Grimshaw, Carolina Scarton, Kalina Bontcheva, Xingyi Song
Accepted at ICWSM 2023
```bibtex
@inproceedings{mu2023vaxxhesitancy,
title={VaxxHesitancy: A Dataset for Studying Hesitancy Towards COVID-19 Vaccination on Twitter},
author={Mu, Yida and Jin, Mali and Grimshaw, Charlie and Scarton, Carolina and Bontcheva, Kalina and Song, Xingyi},
booktitle={Proceedings of the International AAAI Conference on Web and Social Media},
volume={17},
pages={1052--1062},
year={2023}
}
```
---
license: mit
---
|
AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-sft-3epochs | AlekseyKorshuk | 2024-11-06T14:15:56Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-06T10:34:06Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
datasets: AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft
library_name: transformers
model_name: ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-sft-3epochs
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-sft-3epochs
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft](https://huggingface.co/datasets/AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-sft-3epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/aleksey-korshuk/huggingface/runs/bfyzbjtg)
This model was trained with SFT.
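As a hedged sketch, a comparable SFT run could be launched with TRL's `SFTTrainer` along the lines below; the hyperparameters are illustrative (only the 3 epochs and base model are taken from this card), and the dataset is assumed to already be in a chat format TRL can consume.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumption: the dataset exposes a "train" split formatted as chat messages.
dataset = load_dataset(
    "AlekseyKorshuk/ai-detection-gutenberg-human-v2-formatted-ai-sft",
    split="train",
)

training_args = SFTConfig(
    output_dir="ai-detection-gutenberg-human-v2-formatted-ai-sft-qwen-7b-sft-3epochs",
    num_train_epochs=3,  # matches the "3epochs" in the model name; other settings are guesses
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",  # base model named in this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```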
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.4.1+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |