<!-- Dataset schema: modelId string (length 5–138); author string (length 2–42); last_modified date (2020-02-15 11:33:14 to 2025-04-15 06:29:46); downloads int64 (0 to 223M); likes int64 (0 to 11.7k); library_name string (426 classes); tags sequence (length 1 to 4.05k); pipeline_tag string (54 classes); createdAt date (2022-03-02 23:29:04 to 2025-04-15 06:29:46); card string (length 11 to 1.01M) -->
modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
isspek/roberta-base_ebola_mistral_2_2e-5_16_undersampling_0.3 | isspek | "2024-12-01T14:53:38Z" | 196 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-17T10:57:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
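The card leaves this section empty. As a minimal sketch, the checkpoint can be tried through the text-classification pipeline; the model id comes from this row's metadata, while the example input, the `top_label` helper, and the label semantics are illustrative assumptions (the card does not document them).

```python
from typing import Dict, List


def top_label(predictions: List[Dict]) -> str:
    """Pick the highest-scoring label from a pipeline result list."""
    return max(predictions, key=lambda p: p["score"])["label"]


def classify(texts: List[str]) -> List[Dict]:
    """Run the checkpoint as a text-classification pipeline.

    The import is deferred so this file loads even without transformers/torch
    installed (`pip install transformers torch`).
    """
    from transformers import pipeline

    clf = pipeline(
        "text-classification",
        model="isspek/roberta-base_ebola_mistral_2_2e-5_16_undersampling_0.3",
    )
    return clf(texts)


if __name__ == "__main__":
    # The repo name suggests Ebola-related text; the label meanings are
    # undocumented, so inspect the model's id2label mapping before relying
    # on the output.
    preds = classify(["New Ebola cases were reported in the region this week."])
    print(top_label(preds))
```

Check the checkpoint's `config.json` for the actual `id2label` mapping before interpreting predictions.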
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/roberta-base_covid_chatgpt_4_2e-5_16_undersampling_0.4 | isspek | "2024-12-29T09:57:25Z" | 182 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-29T09:55:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
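No starter code is provided in the card. A hedged sketch of scoring one input with the tokenizer and model directly (rather than the pipeline) follows; the `softmax` helper is a standard utility, and the number and meaning of the output classes are assumptions, since the card does not document them.

```python
import math
from typing import List


def softmax(logits: List[float]) -> List[float]:
    """Numerically stable softmax over a flat list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def score(text: str) -> List[float]:
    """Return per-class probabilities for one input.

    Imports are deferred so the file stays importable without torch/transformers
    installed.
    """
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "isspek/roberta-base_covid_chatgpt_4_2e-5_16_undersampling_0.4"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    with torch.no_grad():
        logits = model(**tok(text, return_tensors="pt")).logits[0].tolist()
    return softmax(logits)
```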
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ssale2/betting_spam_detection_model_roberta | ssale2 | "2025-03-15T16:59:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-15T16:59:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
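Since the card gives no starter code, here is a hedged sketch of using the checkpoint as a spam filter. Which label index means "betting spam", and the 0.5 threshold, are assumptions not stated in the card — check the checkpoint's `id2label` mapping before deploying anything like this.

```python
from typing import Dict, List


def flag_spam(predictions: List[Dict], spam_label: str = "LABEL_1",
              threshold: float = 0.5) -> List[bool]:
    """Turn text-classification pipeline outputs into boolean spam flags.

    `spam_label` is a guess at the positive class; the card does not say
    which label corresponds to betting spam.
    """
    return [p["label"] == spam_label and p["score"] >= threshold
            for p in predictions]


def run(texts: List[str]) -> List[bool]:
    """Classify texts and flag likely spam (deferred heavy import)."""
    from transformers import pipeline

    clf = pipeline("text-classification",
                   model="ssale2/betting_spam_detection_model_roberta")
    return flag_spam(clf(texts))
```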
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuhan123/mistral-7b-baseline | Yuhan123 | "2025-03-13T22:16:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-13T22:13:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
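The card leaves this section empty. Given the repo's "text-generation" and "conversational" tags, a single-turn chat sketch is shown below; the availability of a chat template, the bfloat16 dtype, and `device_map="auto"` are assumptions about this checkpoint, not documented facts.

```python
def as_chat(prompt: str):
    """Wrap a user prompt in the chat-messages format."""
    return [{"role": "user", "content": prompt}]


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a reply using the tokenizer's chat template.

    Imports are deferred so this file loads without torch/transformers;
    running it needs a GPU-sized download (a 7B model).
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "Yuhan123/mistral-7b-baseline"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=torch.bfloat16, device_map="auto")
    inputs = tok.apply_chat_template(
        as_chat(prompt), return_tensors="pt",
        add_generation_prompt=True).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```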
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vladimirshebuniayeu/bert-base-cased-rm-se-100000steps-lora3 | vladimirshebuniayeu | "2025-03-21T12:25:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-21T12:25:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
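The card leaves this section empty and the row has no pipeline tag. The repo name hints at a reward model trained with LoRA; whether the weights load directly with `AutoModel` (versus needing `peft` adapters applied to a `bert-base-cased` base, or a custom scoring head) is an open assumption — inspect the repository files before using this sketch.

```python
def embed(text: str):
    """Load the checkpoint generically and return its [CLS] hidden state.

    Deferred imports keep this file importable without torch/transformers.
    If the repo ships only LoRA adapter weights, this load will fail and
    `peft.PeftModel.from_pretrained` on the base model is needed instead.
    """
    import torch
    from transformers import AutoModel, AutoTokenizer

    name = "vladimirshebuniayeu/bert-base-cased-rm-se-100000steps-lora3"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    with torch.no_grad():
        hidden = model(**tok(text, return_tensors="pt")).last_hidden_state
    return hidden[0, 0]  # vector for the [CLS] token
```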
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sharan1712/llama3_8B_hhrlhf_qlora_4bit_1a | Sharan1712 | "2024-07-29T23:34:49Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-29T23:31:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
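The card leaves this section empty. The row's "4-bit" and "bitsandbytes" tags suggest the quantization config ships with the repo, so a plain `from_pretrained` on a CUDA machine with `bitsandbytes` installed should pick it up — that, and `device_map="auto"`, are assumptions rather than documented behavior.

```python
def load_4bit():
    """Load the 4-bit checkpoint and its tokenizer.

    Deferred imports keep this file importable without transformers installed;
    running it requires a CUDA GPU plus `pip install transformers bitsandbytes`.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "Sharan1712/llama3_8B_hhrlhf_qlora_4bit_1a"
    tok = AutoTokenizer.from_pretrained(name)
    # The repo's saved quantization_config (bitsandbytes 4-bit) is applied
    # automatically by from_pretrained.
    model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")
    return tok, model
```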
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/argilla.CapybaraHermes-2.5-Mistral-7B-GGUF | DevQuasar | "2025-03-17T10:48:10Z" | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:argilla/CapybaraHermes-2.5-Mistral-7B",
"base_model:quantized:argilla/CapybaraHermes-2.5-Mistral-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-03-17T10:23:39Z" | ---
base_model:
- argilla/CapybaraHermes-2.5-Mistral-7B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
'Make knowledge free for everyone'
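A minimal local-inference sketch for a GGUF quantization like this one, using `llama-cpp-python`. The model filename is an assumption (check this repository's file list for the actual quant names), and the ChatML prompt helper assumes CapybaraHermes keeps the ChatML format of its OpenHermes base; treat both as hedged, not as this repo's documented usage.

```python
# Hedged sketch: running a GGUF quantization with llama-cpp-python.
# The model path below is an assumption -- check the repo's file list.

def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML prompt (the format used by the Hermes model family)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def run_local(model_path: str, prompt: str) -> str:
    """Load the GGUF file and generate. Not invoked here: needs the model on disk."""
    from llama_cpp import Llama  # pip install llama-cpp-python
    llm = Llama(model_path=model_path, n_ctx=4096)
    out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
    return out["choices"][0]["text"]

prompt = chatml_prompt("You are a helpful assistant.", "What is quantization?")
```

Pass `run_local` a path such as a downloaded `*.gguf` file from this repository to generate locally.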
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
Arlolo0/UniScene | Arlolo0 | "2025-03-17T06:54:51Z" | 0 | 0 | null | [
"arxiv:2412.05435",
"region:us"
] | null | "2025-03-16T05:17:26Z" | ## UniScene: Unified Occupancy-centric Driving Scene Generation [CVPR 2025]
[](https://arxiv.org/abs/2412.05435)
[](https://arlo0o.github.io/uniscene/)
[](./assets/UniScene-arxiv.pdf)
### Abstract:
<details>
<summary><b>TL;DR</b> The first unified framework for generating three key data forms — semantic occupancy, video, and LiDAR — in driving scenes. </summary>
Generating high-fidelity, controllable, and annotated training data is critical for autonomous driving. Existing methods typically generate a single data form directly from a coarse scene layout, which not only fails to output rich data forms required for diverse downstream tasks but also struggles to model the direct layout-to-data distribution. In this paper, we introduce UniScene, the first unified framework for generating three key data forms — semantic occupancy, video, and LiDAR — in driving scenes. UniScene employs a progressive generation process that decomposes the complex task of scene generation into two hierarchical steps: (a) first generating semantic occupancy from a customized scene layout as a meta scene representation rich in both semantic and geometric information, and then (b) conditioned on occupancy, generating video and LiDAR data, respectively, with two novel transfer strategies of Gaussian-based Joint Rendering and Prior-guided Sparse Modeling. This occupancy-centric approach reduces the generation burden, especially for intricate scenes, while providing detailed intermediate representations for the subsequent generation stages. Extensive experiments demonstrate that UniScene outperforms previous SOTAs in the occupancy, video, and LiDAR generation, which also indeed benefits downstream driving tasks.
</details> |
barc0/Llama-3.1-ARC-Heavy-Induction-8B | barc0 | "2024-11-02T15:27:50Z" | 166 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-27T03:43:38Z" | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: l3.1-8b-inst-fft-induction-barc-heavy-200k-lr1e-5-ep2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# l3.1-8b-inst-fft-induction-barc-heavy-200k-lr1e-5-ep2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2765
## Prompt example
We follow the Llama-3.1 instruct template.
For example, the ARC public evaluation problem 62ab2642 is converted to:
```
[{"role": "system", "content": "You are a world-class puzzle solver with exceptional pattern recognition skills and expertise in Python programming. Your task is to analyze puzzles and provide Python solutions."},
{"role": "user", "content": "Given input-output grid pairs as reference examples, carefully observe the patterns to predict the output grid for new test input. Each pair follows the same transformation rule. Grids are 2D arrays represented as strings, with cells (colors) separated by spaces and rows by newlines.\nHere are the input and output grids for the reference examples:\nExample 1\nInput:\nGray Black Black Gray Black\nGray Black Black Gray Black\nGray Black Gray Gray Gray\nGray Gray Gray Black Black\nBlack Black Gray Black Black\nBlack Black Gray Gray Gray\nBlack Black Black Gray Black\nGray Gray Gray Gray Black\nBlack Gray Black Black Black\nBlack Gray Black Black Black\nBlack Gray Gray Gray Black\nBlack Black Black Gray Black\nBlack Gray Gray Gray Gray\nGray Gray Black Black Black\nBlack Gray Black Black Black\n\nOutput:\nGray Black Black Gray Black\nGray Black Black Gray Black\nGray Black Gray Gray Gray\nGray Gray Gray Black Black\nBlack Black Gray Black Black\nBlack Black Gray Gray Gray\nBlack Black Black Gray Purple\nGray Gray Gray Gray Purple\nBlack Gray Purple Purple Purple\nBlack Gray Purple Purple Purple\nBlack Gray Gray Gray Purple\nBlack Black Black Gray Purple\nBlack Gray Gray Gray Gray\nGray Gray Black Black Black\nOrange Gray Black Black Black\n\n\nExample 2\nInput:\nBlack Black Gray Black Black Gray Black Black Black\nBlack Black Gray Gray Gray Gray Black Black Black\nGray Gray Gray Black Black Black Black Black Black\nBlack Gray Black Black Black Black Black Black Black\nBlack Gray Black Black Black Gray Gray Gray Gray\nBlack Gray Gray Gray Gray Gray Black Black Black\nGray Gray Black Black Black Gray Gray Gray Gray\nBlack Black Black Black Black Gray Black Black Black\nGray Gray Gray Gray Gray Gray Black Black Black\nBlack Black Black Black Black Gray Black Black Black\n\nOutput:\nBlack Black Gray Orange Orange Gray Purple Purple Purple\nBlack Black Gray Gray Gray Gray Purple Purple Purple\nGray Gray Gray Purple Purple Purple Purple Purple 
Purple\nBlack Gray Purple Purple Purple Purple Purple Purple Purple\nBlack Gray Purple Purple Purple Gray Gray Gray Gray\nBlack Gray Gray Gray Gray Gray Black Black Black\nGray Gray Black Black Black Gray Gray Gray Gray\nBlack Black Black Black Black Gray Black Black Black\nGray Gray Gray Gray Gray Gray Black Black Black\nBlack Black Black Black Black Gray Black Black Black\n\n\nExample 3\nInput:\nBlack Gray Black Black Gray Black Black Black Black Gray Black Black\nBlack Gray Black Black Gray Gray Gray Black Black Gray Black Black\nBlack Gray Gray Gray Gray Black Gray Black Black Gray Black Black\nBlack Black Gray Black Black Black Gray Gray Gray Gray Black Black\nGray Gray Gray Black Black Black Gray Black Black Gray Gray Gray\nBlack Black Black Black Black Black Gray Black Black Black Black Black\nBlack Black Black Gray Gray Gray Gray Black Black Black Black Black\nGray Gray Gray Gray Black Black Gray Black Black Black Black Black\nBlack Black Black Gray Black Black Gray Gray Gray Black Black Black\nBlack Black Black Gray Black Black Black Black Gray Black Black Black\n\nOutput:\nBlack Gray Orange Orange Gray Black Black Black Black Gray Black Black\nBlack Gray Orange Orange Gray Gray Gray Black Black Gray Black Black\nBlack Gray Gray Gray Gray Black Gray Black Black Gray Black Black\nBlack Black Gray Black Black Black Gray Gray Gray Gray Black Black\nGray Gray Gray Black Black Black Gray Purple Purple Gray Gray Gray\nBlack Black Black Black Black Black Gray Purple Purple Purple Purple Purple\nBlack Black Black Gray Gray Gray Gray Purple Purple Purple Purple Purple\nGray Gray Gray Gray Black Black Gray Purple Purple Purple Purple Purple\nBlack Black Black Gray Black Black Gray Gray Gray Purple Purple Purple\nBlack Black Black Gray Black Black Black Black Gray Purple Purple Purple\n\n\nHere is the input grid for the test example:\nInput:\nBlack Gray Black Black Black Black Black Gray Black Black Gray Black\nBlack Gray Black Black Black Gray Gray Gray Black Gray 
Gray Black\nGray Gray Gray Black Black Gray Black Gray Gray Gray Black Black\nBlack Black Gray Gray Gray Gray Black Gray Black Gray Gray Black\nBlack Black Black Gray Black Black Black Gray Black Black Gray Black\n\nWrite a Python function `transform` that can convert any given input grid to its corresponding output grid based on the pattern observed in the reference examples."}
]
```
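As a sketch of how such a conversation becomes model input, the messages can be passed through the tokenizer's chat template. The user content below is a shortened placeholder, not the full 62ab2642 prompt, and the tokenizer download is wrapped in a function rather than run at import time:

```python
# Hedged sketch: building and templating a BARC-style induction prompt.
# The user content is a shortened stand-in for a full ARC problem statement.

messages = [
    {"role": "system", "content": (
        "You are a world-class puzzle solver with exceptional pattern "
        "recognition skills and expertise in Python programming. Your task "
        "is to analyze puzzles and provide Python solutions."
    )},
    {"role": "user", "content": (
        "Given input-output grid pairs as reference examples, ... "
        "Write a Python function `transform` that can convert any given "
        "input grid to its corresponding output grid."
    )},
]

def build_prompt(messages) -> str:
    """Apply the Llama-3.1 instruct template. Not invoked here: downloads the tokenizer."""
    from transformers import AutoTokenizer
    tok = AutoTokenizer.from_pretrained("barc0/Llama-3.1-ARC-Heavy-Induction-8B")
    return tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```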
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
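With `lr_scheduler_type: cosine` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps linearly to 1e-05 over the first 10% of steps and then decays along a half-cosine to zero. A minimal sketch of that schedule (it mirrors the shape of the `transformers` cosine-with-warmup scheduler; the total step count is taken from the results table):

```python
import math

BASE_LR = 1e-05
TOTAL_STEPS = 2956                    # 2 epochs x 1478 steps per epoch
WARMUP_STEPS = int(0.1 * TOTAL_STEPS) # warmup ratio 0.1

def lr_at(step: int) -> float:
    """Linear warmup over the first 10% of steps, then cosine decay to zero."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return BASE_LR * 0.5 * (1.0 + math.cos(math.pi * progress))
```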
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2944 | 1.0 | 1478 | 0.2865 |
| 0.2388 | 2.0 | 2956 | 0.2765 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.1+cu124
- Datasets 3.0.2
- Tokenizers 0.19.1
|
xaviergillard/parti-pris-v2-f32 | xaviergillard | "2024-10-14T21:11:06Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"pretraining",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-10-13T06:00:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
beyoru/SQL13_3 | beyoru | "2025-03-13T17:52:41Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-Coder-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-13T17:50:17Z" | ---
base_model: unsloth/Qwen2.5-Coder-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** beyoru
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
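A hedged usage sketch for text-to-SQL generation with this fine-tune. The schema and question are illustrative, the exact prompt format this checkpoint expects is an assumption (Qwen2.5-Coder-Instruct models accept standard chat messages), and model loading is kept inside a function rather than run at import time:

```python
# Hedged sketch: prompting a Qwen2.5-Coder SQL fine-tune via chat messages.
# Schema and question are illustrative placeholders.

def sql_messages(schema: str, question: str) -> list:
    """Pack a CREATE TABLE schema and a natural-language question into chat messages."""
    return [
        {"role": "system", "content": "You translate questions into SQL."},
        {"role": "user", "content": f"Schema:\n{schema}\n\nQuestion: {question}"},
    ]

def generate_sql(messages) -> str:
    """Run the model. Not invoked here: downloads the 3B checkpoint."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained("beyoru/SQL13_3")
    model = AutoModelForCausalLM.from_pretrained("beyoru/SQL13_3", device_map="auto")
    inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
    out = model.generate(inputs.to(model.device), max_new_tokens=128)
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

msgs = sql_messages("CREATE TABLE users(id INT, name TEXT);", "How many users are there?")
```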
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AiMavenAi/AiMaven-Prometheus | AiMavenAi | "2024-06-28T00:51:28Z" | 54 | 6 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"jefferylovely/SuperThetaMaven",
"flemmingmiguel/MBX-7B-v3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-02T06:20:51Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- jefferylovely/SuperThetaMaven
- flemmingmiguel/MBX-7B-v3
model-index:
- name: AiMaven-Prometheus
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.98
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AiMavenAi/AiMaven-Prometheus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.83
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AiMavenAi/AiMaven-Prometheus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.17
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AiMavenAi/AiMaven-Prometheus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.22
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AiMavenAi/AiMaven-Prometheus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.16
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AiMavenAi/AiMaven-Prometheus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.07
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AiMavenAi/AiMaven-Prometheus
name: Open LLM Leaderboard
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63d2fd4fb734eaa4d4f83928/1QsX5xh9WZRpArL-8ut6N.jpeg)
# jefferylovely/AiMaven-Prometheus
jefferylovely/AiMaven-Prometheus is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [jefferylovely/SuperThetaMaven](https://huggingface.co/jefferylovely/SuperThetaMaven)
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: jefferylovely/SuperThetaMaven
layer_range: [0, 32]
- model: flemmingmiguel/MBX-7B-v3
layer_range: [0, 32]
merge_method: slerp
base_model: flemmingmiguel/MBX-7B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
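The `slerp` merge method interpolates each pair of weight tensors along the great circle between them, with the per-filter `t` values above controlling how far each layer sits from the base model. A minimal one-vector sketch of the formula (an illustration of the math, not mergekit's actual implementation):

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two vectors (lists of floats)."""
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_omega = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    omega = math.acos(cos_omega)
    if omega < 1e-6:  # nearly parallel vectors: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

At `t=0` this returns the first tensor and at `t=1` the second; the filtered lists in the config (e.g. `[0, 0.5, 0.3, 0.7, 1]`) vary `t` across layer depth separately for self-attention and MLP weights.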
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jefferylovely/AiMaven-Prometheus"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AiMavenAi__AiMaven-Prometheus)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.74|
|AI2 Reasoning Challenge (25-Shot)|73.98|
|HellaSwag (10-Shot) |88.83|
|MMLU (5-Shot) |65.17|
|TruthfulQA (0-shot) |72.22|
|Winogrande (5-shot) |85.16|
|GSM8k (5-shot) |69.07| |
isspek/xlnet-base-cased_monkeypox_top3_1_2e-5_16_undersampling_0.4 | isspek | "2025-03-23T10:57:09Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-28T17:35:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/nerugm-base-4 | apwic | "2024-06-03T18:10:30Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-05-27T02:25:57Z" | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: nerugm-base-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerugm-base-4
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3325
- Location Precision: 0.7922
- Location Recall: 0.8356
- Location F1: 0.8133
- Location Number: 73
- Organization Precision: 0.7013
- Organization Recall: 0.8308
- Organization F1: 0.7606
- Organization Number: 65
- Person Precision: 0.9226
- Person Recall: 0.9533
- Person F1: 0.9377
- Person Number: 150
- Quantity Precision: 0.6667
- Quantity Recall: 0.7586
- Quantity F1: 0.7097
- Quantity Number: 29
- Time Precision: 0.8378
- Time Recall: 0.9118
- Time F1: 0.8732
- Time Number: 34
- Overall Precision: 0.8206
- Overall Recall: 0.8860
- Overall F1: 0.8521
- Overall Accuracy: 0.9670
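A hedged usage sketch for this Indonesian NER model. The entity types are taken from the evaluation above (Location, Organization, Person, Quantity, Time); the pipeline call is an assumption based on the model's token-classification head and is not invoked at import time:

```python
# Hedged sketch: token-classification inference with the fine-tuned model.
# Not run here -- calling tag() would download the checkpoint.

ENTITY_TYPES = ["Location", "Organization", "Person", "Quantity", "Time"]

def tag(text: str):
    """Run NER with a transformers pipeline, merging subword pieces into spans."""
    from transformers import pipeline
    ner = pipeline(
        "token-classification",
        model="apwic/nerugm-base-4",
        aggregation_strategy="simple",  # group B-/I- pieces into whole entities
    )
    return ner(text)

# e.g. tag("Joko Widodo berkunjung ke Yogyakarta.")  # illustrative call, not executed
```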
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Location Precision | Location Recall | Location F1 | Location Number | Organization Precision | Organization Recall | Organization F1 | Organization Number | Person Precision | Person Recall | Person F1 | Person Number | Quantity Precision | Quantity Recall | Quantity F1 | Quantity Number | Time Precision | Time Recall | Time F1 | Time Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:-------------:|:---------:|:-------------:|:------------------:|:---------------:|:-----------:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3326 | 1.0 | 106 | 0.1188 | 0.8052 | 0.8493 | 0.8267 | 73 | 0.6184 | 0.7231 | 0.6667 | 65 | 0.8662 | 0.9067 | 0.8860 | 150 | 0.6667 | 0.7586 | 0.7097 | 29 | 0.9667 | 0.8529 | 0.9062 | 34 | 0.7936 | 0.8433 | 0.8177 | 0.9621 |
| 0.1192 | 2.0 | 212 | 0.1727 | 0.6702 | 0.8630 | 0.7545 | 73 | 0.4597 | 0.8769 | 0.6032 | 65 | 0.8688 | 0.9267 | 0.8968 | 150 | 0.6098 | 0.8621 | 0.7143 | 29 | 0.6905 | 0.8529 | 0.7632 | 34 | 0.6790 | 0.8917 | 0.7709 | 0.9397 |
| 0.0772 | 3.0 | 318 | 0.1291 | 0.7356 | 0.8767 | 0.8 | 73 | 0.6883 | 0.8154 | 0.7465 | 65 | 0.8688 | 0.9267 | 0.8968 | 150 | 0.7742 | 0.8276 | 0.8000 | 29 | 0.8529 | 0.8529 | 0.8529 | 34 | 0.7943 | 0.8803 | 0.8351 | 0.9626 |
| 0.051 | 4.0 | 424 | 0.1436 | 0.7561 | 0.8493 | 0.8000 | 73 | 0.6022 | 0.8615 | 0.7089 | 65 | 0.8797 | 0.9267 | 0.9026 | 150 | 0.7941 | 0.9310 | 0.8571 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.7734 | 0.8946 | 0.8296 | 0.9589 |
| 0.0341 | 5.0 | 530 | 0.1558 | 0.7564 | 0.8082 | 0.7815 | 73 | 0.6962 | 0.8462 | 0.7639 | 65 | 0.8903 | 0.92 | 0.9049 | 150 | 0.7941 | 0.9310 | 0.8571 | 29 | 0.7143 | 0.8824 | 0.7895 | 34 | 0.7964 | 0.8803 | 0.8363 | 0.9651 |
| 0.0289 | 6.0 | 636 | 0.1820 | 0.7619 | 0.8767 | 0.8153 | 73 | 0.625 | 0.7692 | 0.6897 | 65 | 0.8415 | 0.92 | 0.8790 | 150 | 0.7941 | 0.9310 | 0.8571 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.7744 | 0.8803 | 0.824 | 0.9606 |
| 0.0223 | 7.0 | 742 | 0.1874 | 0.7683 | 0.8630 | 0.8129 | 73 | 0.6486 | 0.7385 | 0.6906 | 65 | 0.9026 | 0.9267 | 0.9145 | 150 | 0.7742 | 0.8276 | 0.8000 | 29 | 0.7632 | 0.8529 | 0.8056 | 34 | 0.7995 | 0.8632 | 0.8301 | 0.9621 |
| 0.0154 | 8.0 | 848 | 0.2203 | 0.8289 | 0.8630 | 0.8456 | 73 | 0.6353 | 0.8308 | 0.7200 | 65 | 0.8846 | 0.92 | 0.9020 | 150 | 0.75 | 0.8276 | 0.7869 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.7984 | 0.8803 | 0.8374 | 0.9623 |
| 0.0122 | 9.0 | 954 | 0.2047 | 0.8101 | 0.8767 | 0.8421 | 73 | 0.7368 | 0.8615 | 0.7943 | 65 | 0.8924 | 0.94 | 0.9156 | 150 | 0.7742 | 0.8276 | 0.8000 | 29 | 0.7632 | 0.8529 | 0.8056 | 34 | 0.8220 | 0.8946 | 0.8568 | 0.9658 |
| 0.0125 | 10.0 | 1060 | 0.2343 | 0.8493 | 0.8493 | 0.8493 | 73 | 0.725 | 0.8923 | 0.8 | 65 | 0.9097 | 0.94 | 0.9246 | 150 | 0.6842 | 0.8966 | 0.7761 | 29 | 0.6744 | 0.8529 | 0.7532 | 34 | 0.8123 | 0.9003 | 0.8541 | 0.9619 |
| 0.0086 | 11.0 | 1166 | 0.3140 | 0.6421 | 0.8356 | 0.7262 | 73 | 0.6512 | 0.8615 | 0.7417 | 65 | 0.8765 | 0.9467 | 0.9103 | 150 | 0.6757 | 0.8621 | 0.7576 | 29 | 0.6591 | 0.8529 | 0.7436 | 34 | 0.7382 | 0.8917 | 0.8077 | 0.9520 |
| 0.0069 | 12.0 | 1272 | 0.2598 | 0.8592 | 0.8356 | 0.8472 | 73 | 0.7108 | 0.9077 | 0.7973 | 65 | 0.8642 | 0.9333 | 0.8974 | 150 | 0.6842 | 0.8966 | 0.7761 | 29 | 0.6522 | 0.8824 | 0.75 | 34 | 0.79 | 0.9003 | 0.8415 | 0.9619 |
| 0.0069 | 13.0 | 1378 | 0.2524 | 0.7595 | 0.8219 | 0.7895 | 73 | 0.7123 | 0.8 | 0.7536 | 65 | 0.8981 | 0.94 | 0.9186 | 150 | 0.7879 | 0.8966 | 0.8387 | 29 | 0.6905 | 0.8529 | 0.7632 | 34 | 0.8021 | 0.8775 | 0.8381 | 0.9623 |
| 0.0048 | 14.0 | 1484 | 0.2733 | 0.7294 | 0.8493 | 0.7848 | 73 | 0.7368 | 0.8615 | 0.7943 | 65 | 0.9103 | 0.9467 | 0.9281 | 150 | 0.7188 | 0.7931 | 0.7541 | 29 | 0.775 | 0.9118 | 0.8378 | 34 | 0.8072 | 0.8946 | 0.8486 | 0.9633 |
| 0.0069 | 15.0 | 1590 | 0.2588 | 0.7875 | 0.8630 | 0.8235 | 73 | 0.6914 | 0.8615 | 0.7671 | 65 | 0.9103 | 0.9467 | 0.9281 | 150 | 0.75 | 0.8276 | 0.7869 | 29 | 0.8649 | 0.9412 | 0.9014 | 34 | 0.8212 | 0.9031 | 0.8602 | 0.9665 |
| 0.0047 | 16.0 | 1696 | 0.2755 | 0.7143 | 0.8219 | 0.7643 | 73 | 0.6875 | 0.8462 | 0.7586 | 65 | 0.9091 | 0.9333 | 0.9211 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.9412 | 0.9412 | 0.9412 | 34 | 0.8104 | 0.8889 | 0.8478 | 0.9638 |
| 0.0049 | 17.0 | 1802 | 0.2742 | 0.8052 | 0.8493 | 0.8267 | 73 | 0.7083 | 0.7846 | 0.7445 | 65 | 0.8758 | 0.94 | 0.9068 | 150 | 0.7143 | 0.8621 | 0.7813 | 29 | 0.7209 | 0.9118 | 0.8052 | 34 | 0.7990 | 0.8832 | 0.8390 | 0.9646 |
| 0.0049 | 18.0 | 1908 | 0.2764 | 0.7848 | 0.8493 | 0.8158 | 73 | 0.7671 | 0.8615 | 0.8116 | 65 | 0.8938 | 0.9533 | 0.9226 | 150 | 0.7647 | 0.8966 | 0.8254 | 29 | 0.75 | 0.8824 | 0.8108 | 34 | 0.8212 | 0.9031 | 0.8602 | 0.9653 |
| 0.0033 | 19.0 | 2014 | 0.2768 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7606 | 0.8308 | 0.7941 | 65 | 0.8812 | 0.94 | 0.9097 | 150 | 0.7941 | 0.9310 | 0.8571 | 29 | 0.7949 | 0.9118 | 0.8493 | 34 | 0.8241 | 0.8946 | 0.8579 | 0.9670 |
| 0.0042 | 20.0 | 2120 | 0.3033 | 0.7241 | 0.8630 | 0.7875 | 73 | 0.7324 | 0.8 | 0.7647 | 65 | 0.9038 | 0.94 | 0.9216 | 150 | 0.7647 | 0.8966 | 0.8254 | 29 | 0.7045 | 0.9118 | 0.7949 | 34 | 0.7985 | 0.8917 | 0.8425 | 0.9633 |
| 0.0036 | 21.0 | 2226 | 0.2692 | 0.8133 | 0.8356 | 0.8243 | 73 | 0.7297 | 0.8308 | 0.7770 | 65 | 0.9045 | 0.9467 | 0.9251 | 150 | 0.7941 | 0.9310 | 0.8571 | 29 | 0.8205 | 0.9412 | 0.8767 | 34 | 0.8338 | 0.9003 | 0.8658 | 0.9655 |
| 0.0073 | 22.0 | 2332 | 0.3261 | 0.6923 | 0.8630 | 0.7683 | 73 | 0.6429 | 0.8308 | 0.7248 | 65 | 0.8868 | 0.94 | 0.9126 | 150 | 0.8065 | 0.8621 | 0.8333 | 29 | 0.7381 | 0.9118 | 0.8158 | 34 | 0.7715 | 0.8946 | 0.8285 | 0.9574 |
| 0.0024 | 23.0 | 2438 | 0.2863 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.7125 | 0.8769 | 0.7862 | 65 | 0.8968 | 0.9267 | 0.9115 | 150 | 0.7143 | 0.8621 | 0.7813 | 29 | 0.7805 | 0.9412 | 0.8533 | 34 | 0.8098 | 0.8974 | 0.8514 | 0.9646 |
| 0.0031 | 24.0 | 2544 | 0.3045 | 0.7590 | 0.8630 | 0.8077 | 73 | 0.6591 | 0.8923 | 0.7582 | 65 | 0.8994 | 0.9533 | 0.9256 | 150 | 0.7941 | 0.9310 | 0.8571 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.7965 | 0.9145 | 0.8515 | 0.9631 |
| 0.0033 | 25.0 | 2650 | 0.3293 | 0.7326 | 0.8630 | 0.7925 | 73 | 0.6974 | 0.8154 | 0.7518 | 65 | 0.8868 | 0.94 | 0.9126 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.7561 | 0.9118 | 0.8267 | 34 | 0.7904 | 0.8917 | 0.8380 | 0.9609 |
| 0.0029 | 26.0 | 2756 | 0.2977 | 0.8025 | 0.8904 | 0.8442 | 73 | 0.6706 | 0.8769 | 0.76 | 65 | 0.9045 | 0.9467 | 0.9251 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7805 | 0.9412 | 0.8533 | 34 | 0.8086 | 0.9145 | 0.8583 | 0.9636 |
| 0.0035 | 27.0 | 2862 | 0.3316 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.6517 | 0.8923 | 0.7532 | 65 | 0.9103 | 0.9467 | 0.9281 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.7805 | 0.9412 | 0.8533 | 34 | 0.8076 | 0.9088 | 0.8552 | 0.9631 |
| 0.0076 | 28.0 | 2968 | 0.2618 | 0.8 | 0.8767 | 0.8366 | 73 | 0.7667 | 0.7077 | 0.736 | 65 | 0.8917 | 0.9333 | 0.9121 | 150 | 0.8065 | 0.8621 | 0.8333 | 29 | 0.8205 | 0.9412 | 0.8767 | 34 | 0.8365 | 0.8746 | 0.8552 | 0.9646 |
| 0.0027 | 29.0 | 3074 | 0.3309 | 0.75 | 0.8630 | 0.8025 | 73 | 0.6552 | 0.8769 | 0.75 | 65 | 0.8924 | 0.94 | 0.9156 | 150 | 0.75 | 0.8276 | 0.7869 | 29 | 0.6596 | 0.9118 | 0.7654 | 34 | 0.7745 | 0.9003 | 0.8327 | 0.9589 |
| 0.0025 | 30.0 | 3180 | 0.3092 | 0.8 | 0.8767 | 0.8366 | 73 | 0.6951 | 0.8769 | 0.7755 | 65 | 0.9038 | 0.94 | 0.9216 | 150 | 0.7812 | 0.8621 | 0.8197 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8191 | 0.9031 | 0.8591 | 0.9648 |
| 0.003 | 31.0 | 3286 | 0.3234 | 0.7848 | 0.8493 | 0.8158 | 73 | 0.7 | 0.8615 | 0.7724 | 65 | 0.8981 | 0.94 | 0.9186 | 150 | 0.75 | 0.9310 | 0.8308 | 29 | 0.6977 | 0.8824 | 0.7792 | 34 | 0.8 | 0.9003 | 0.8472 | 0.9591 |
| 0.0061 | 32.0 | 3392 | 0.2889 | 0.8077 | 0.8630 | 0.8344 | 73 | 0.6548 | 0.8462 | 0.7383 | 65 | 0.8917 | 0.9333 | 0.9121 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.8611 | 0.9118 | 0.8857 | 34 | 0.8072 | 0.8946 | 0.8486 | 0.9638 |
| 0.0028 | 33.0 | 3498 | 0.2616 | 0.8514 | 0.8630 | 0.8571 | 73 | 0.6625 | 0.8154 | 0.7310 | 65 | 0.8910 | 0.9267 | 0.9085 | 150 | 0.6857 | 0.8276 | 0.75 | 29 | 0.8649 | 0.9412 | 0.9014 | 34 | 0.8141 | 0.8860 | 0.8486 | 0.9660 |
| 0.0024 | 34.0 | 3604 | 0.2858 | 0.7821 | 0.8356 | 0.8079 | 73 | 0.6512 | 0.8615 | 0.7417 | 65 | 0.9097 | 0.94 | 0.9246 | 150 | 0.7647 | 0.8966 | 0.8254 | 29 | 0.8857 | 0.9118 | 0.8986 | 34 | 0.8119 | 0.8974 | 0.8525 | 0.9658 |
| 0.0016 | 35.0 | 3710 | 0.3019 | 0.8182 | 0.8630 | 0.8400 | 73 | 0.6951 | 0.8769 | 0.7755 | 65 | 0.9216 | 0.94 | 0.9307 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.8571 | 0.8824 | 0.8696 | 34 | 0.8294 | 0.9003 | 0.8634 | 0.9660 |
| 0.0017 | 36.0 | 3816 | 0.2798 | 0.7875 | 0.8630 | 0.8235 | 73 | 0.6986 | 0.7846 | 0.7391 | 65 | 0.9167 | 0.9533 | 0.9346 | 150 | 0.7188 | 0.7931 | 0.7541 | 29 | 0.8857 | 0.9118 | 0.8986 | 34 | 0.8271 | 0.8860 | 0.8556 | 0.9665 |
| 0.0012 | 37.0 | 3922 | 0.3007 | 0.75 | 0.8630 | 0.8025 | 73 | 0.7656 | 0.7538 | 0.7597 | 65 | 0.9103 | 0.9467 | 0.9281 | 150 | 0.7273 | 0.8276 | 0.7742 | 29 | 0.7949 | 0.9118 | 0.8493 | 34 | 0.8218 | 0.8803 | 0.8501 | 0.9655 |
| 0.002 | 38.0 | 4028 | 0.3204 | 0.8052 | 0.8493 | 0.8267 | 73 | 0.6707 | 0.8462 | 0.7483 | 65 | 0.9281 | 0.9467 | 0.9373 | 150 | 0.7353 | 0.8621 | 0.7937 | 29 | 0.7209 | 0.9118 | 0.8052 | 34 | 0.8098 | 0.8974 | 0.8514 | 0.9626 |
| 0.0017 | 39.0 | 4134 | 0.2832 | 0.8267 | 0.8493 | 0.8378 | 73 | 0.6279 | 0.8308 | 0.7152 | 65 | 0.8987 | 0.9467 | 0.9221 | 150 | 0.6562 | 0.7241 | 0.6885 | 29 | 0.8857 | 0.9118 | 0.8986 | 34 | 0.8031 | 0.8832 | 0.8412 | 0.9638 |
| 0.0019 | 40.0 | 4240 | 0.3074 | 0.7619 | 0.8767 | 0.8153 | 73 | 0.65 | 0.8 | 0.7172 | 65 | 0.9167 | 0.9533 | 0.9346 | 150 | 0.6857 | 0.8276 | 0.75 | 29 | 0.7632 | 0.8529 | 0.8056 | 34 | 0.7939 | 0.8889 | 0.8387 | 0.9646 |
| 0.0007 | 41.0 | 4346 | 0.3130 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.6506 | 0.8308 | 0.7297 | 65 | 0.9108 | 0.9533 | 0.9316 | 150 | 0.6667 | 0.8276 | 0.7385 | 29 | 0.8649 | 0.9412 | 0.9014 | 34 | 0.8098 | 0.8974 | 0.8514 | 0.9643 |
| 0.0013 | 42.0 | 4452 | 0.2825 | 0.8077 | 0.8630 | 0.8344 | 73 | 0.7015 | 0.7231 | 0.7121 | 65 | 0.9226 | 0.9533 | 0.9377 | 150 | 0.6667 | 0.7586 | 0.7097 | 29 | 0.8857 | 0.9118 | 0.8986 | 34 | 0.8315 | 0.8718 | 0.8512 | 0.9683 |
| 0.0008 | 43.0 | 4558 | 0.3163 | 0.7625 | 0.8356 | 0.7974 | 73 | 0.7333 | 0.8462 | 0.7857 | 65 | 0.9167 | 0.9533 | 0.9346 | 150 | 0.7429 | 0.8966 | 0.8125 | 29 | 0.8611 | 0.9118 | 0.8857 | 34 | 0.8272 | 0.9003 | 0.8622 | 0.9673 |
| 0.0035 | 44.0 | 4664 | 0.3363 | 0.7470 | 0.8493 | 0.7949 | 73 | 0.7051 | 0.8462 | 0.7692 | 65 | 0.9091 | 0.9333 | 0.9211 | 150 | 0.6667 | 0.7586 | 0.7097 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.7984 | 0.8803 | 0.8374 | 0.9623 |
| 0.0032 | 45.0 | 4770 | 0.2655 | 0.7875 | 0.8630 | 0.8235 | 73 | 0.7067 | 0.8154 | 0.7571 | 65 | 0.8974 | 0.9333 | 0.9150 | 150 | 0.75 | 0.8276 | 0.7869 | 29 | 0.9394 | 0.9118 | 0.9254 | 34 | 0.8271 | 0.8860 | 0.8556 | 0.9660 |
| 0.0029 | 46.0 | 4876 | 0.2898 | 0.8026 | 0.8356 | 0.8188 | 73 | 0.7215 | 0.8769 | 0.7917 | 65 | 0.8987 | 0.9467 | 0.9221 | 150 | 0.6 | 0.7241 | 0.6562 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8078 | 0.8860 | 0.8451 | 0.9631 |
| 0.0011 | 47.0 | 4982 | 0.2948 | 0.8133 | 0.8356 | 0.8243 | 73 | 0.7333 | 0.8462 | 0.7857 | 65 | 0.9161 | 0.9467 | 0.9311 | 150 | 0.6111 | 0.7586 | 0.6769 | 29 | 0.75 | 0.8824 | 0.8108 | 34 | 0.8136 | 0.8832 | 0.8470 | 0.9648 |
| 0.0019 | 48.0 | 5088 | 0.2978 | 0.7470 | 0.8493 | 0.7949 | 73 | 0.6923 | 0.8308 | 0.7552 | 65 | 0.9051 | 0.9533 | 0.9286 | 150 | 0.6389 | 0.7931 | 0.7077 | 29 | 0.8649 | 0.9412 | 0.9014 | 34 | 0.8010 | 0.8946 | 0.8452 | 0.9646 |
| 0.0012 | 49.0 | 5194 | 0.3064 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.675 | 0.8308 | 0.7448 | 65 | 0.9097 | 0.94 | 0.9246 | 150 | 0.6944 | 0.8621 | 0.7692 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8151 | 0.8917 | 0.8517 | 0.9628 |
| 0.0005 | 50.0 | 5300 | 0.3279 | 0.8108 | 0.8219 | 0.8163 | 73 | 0.6627 | 0.8462 | 0.7432 | 65 | 0.9103 | 0.9467 | 0.9281 | 150 | 0.6944 | 0.8621 | 0.7692 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.8041 | 0.8889 | 0.8444 | 0.9623 |
| 0.0022 | 51.0 | 5406 | 0.2888 | 0.8493 | 0.8493 | 0.8493 | 73 | 0.7368 | 0.8615 | 0.7943 | 65 | 0.9161 | 0.9467 | 0.9311 | 150 | 0.7333 | 0.7586 | 0.7458 | 29 | 0.8824 | 0.8824 | 0.8824 | 34 | 0.8478 | 0.8889 | 0.8679 | 0.9665 |
| 0.0018 | 52.0 | 5512 | 0.3415 | 0.7778 | 0.8630 | 0.8182 | 73 | 0.6585 | 0.8308 | 0.7347 | 65 | 0.9161 | 0.9467 | 0.9311 | 150 | 0.7059 | 0.8276 | 0.7619 | 29 | 0.6 | 0.8824 | 0.7143 | 34 | 0.7786 | 0.8917 | 0.8313 | 0.9579 |
| 0.0024 | 53.0 | 5618 | 0.3337 | 0.7349 | 0.8356 | 0.7821 | 73 | 0.7051 | 0.8462 | 0.7692 | 65 | 0.9156 | 0.94 | 0.9276 | 150 | 0.7273 | 0.8276 | 0.7742 | 29 | 0.7317 | 0.8824 | 0.8 | 34 | 0.7995 | 0.8860 | 0.8405 | 0.9591 |
| 0.0012 | 54.0 | 5724 | 0.3097 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.6883 | 0.8154 | 0.7465 | 65 | 0.9045 | 0.9467 | 0.9251 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.775 | 0.9118 | 0.8378 | 34 | 0.8073 | 0.8832 | 0.8435 | 0.9643 |
| 0.0016 | 55.0 | 5830 | 0.3207 | 0.7662 | 0.8082 | 0.7867 | 73 | 0.7183 | 0.7846 | 0.75 | 65 | 0.9108 | 0.9533 | 0.9316 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8138 | 0.8718 | 0.8418 | 0.9631 |
| 0.0013 | 56.0 | 5936 | 0.3148 | 0.7792 | 0.8219 | 0.8000 | 73 | 0.7231 | 0.7231 | 0.7231 | 65 | 0.9108 | 0.9533 | 0.9316 | 150 | 0.7188 | 0.7931 | 0.7541 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.8189 | 0.8632 | 0.8405 | 0.9636 |
| 0.0012 | 57.0 | 6042 | 0.3097 | 0.7895 | 0.8219 | 0.8054 | 73 | 0.7162 | 0.8154 | 0.7626 | 65 | 0.9108 | 0.9533 | 0.9316 | 150 | 0.6562 | 0.7241 | 0.6885 | 29 | 0.8788 | 0.8529 | 0.8657 | 34 | 0.8226 | 0.8718 | 0.8465 | 0.9643 |
| 0.0005 | 58.0 | 6148 | 0.3341 | 0.7848 | 0.8493 | 0.8158 | 73 | 0.6753 | 0.8 | 0.7324 | 65 | 0.8704 | 0.94 | 0.9038 | 150 | 0.7059 | 0.8276 | 0.7619 | 29 | 0.75 | 0.8824 | 0.8108 | 34 | 0.7883 | 0.8803 | 0.8318 | 0.9633 |
| 0.0013 | 59.0 | 6254 | 0.3232 | 0.8133 | 0.8356 | 0.8243 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.8868 | 0.94 | 0.9126 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.7317 | 0.8824 | 0.8 | 34 | 0.8026 | 0.8803 | 0.8397 | 0.9638 |
| 0.0012 | 60.0 | 6360 | 0.3059 | 0.7949 | 0.8493 | 0.8212 | 73 | 0.68 | 0.7846 | 0.7286 | 65 | 0.9045 | 0.9467 | 0.9251 | 150 | 0.6875 | 0.7586 | 0.7213 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8100 | 0.8746 | 0.8411 | 0.9653 |
| 0.0014 | 61.0 | 6466 | 0.3144 | 0.7590 | 0.8630 | 0.8077 | 73 | 0.6790 | 0.8462 | 0.7534 | 65 | 0.9167 | 0.9533 | 0.9346 | 150 | 0.6765 | 0.7931 | 0.7302 | 29 | 0.8421 | 0.9412 | 0.8889 | 34 | 0.8061 | 0.9003 | 0.8506 | 0.9663 |
| 0.0002 | 62.0 | 6572 | 0.3230 | 0.7176 | 0.8356 | 0.7722 | 73 | 0.6709 | 0.8154 | 0.7361 | 65 | 0.9226 | 0.9533 | 0.9377 | 150 | 0.6286 | 0.7586 | 0.6875 | 29 | 0.8 | 0.9412 | 0.8649 | 34 | 0.7893 | 0.8860 | 0.8349 | 0.9653 |
| 0.0007 | 63.0 | 6678 | 0.3489 | 0.7722 | 0.8356 | 0.8026 | 73 | 0.7125 | 0.8769 | 0.7862 | 65 | 0.9167 | 0.9533 | 0.9346 | 150 | 0.6471 | 0.7586 | 0.6984 | 29 | 0.7619 | 0.9412 | 0.8421 | 34 | 0.8056 | 0.8974 | 0.8491 | 0.9636 |
| 0.0001 | 64.0 | 6784 | 0.3458 | 0.7722 | 0.8356 | 0.8026 | 73 | 0.7179 | 0.8615 | 0.7832 | 65 | 0.9167 | 0.9533 | 0.9346 | 150 | 0.6471 | 0.7586 | 0.6984 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.8083 | 0.8889 | 0.8467 | 0.9638 |
| 0.001 | 65.0 | 6890 | 0.3409 | 0.7439 | 0.8356 | 0.7871 | 73 | 0.6667 | 0.8 | 0.7273 | 65 | 0.9167 | 0.9533 | 0.9346 | 150 | 0.6286 | 0.7586 | 0.6875 | 29 | 0.7692 | 0.8824 | 0.8219 | 34 | 0.7897 | 0.8775 | 0.8313 | 0.9638 |
| 0.001 | 66.0 | 6996 | 0.3137 | 0.7895 | 0.8219 | 0.8054 | 73 | 0.7051 | 0.8462 | 0.7692 | 65 | 0.9167 | 0.9533 | 0.9346 | 150 | 0.7576 | 0.8621 | 0.8065 | 29 | 0.9091 | 0.8824 | 0.8955 | 34 | 0.8324 | 0.8917 | 0.8611 | 0.9665 |
| 0.0007 | 67.0 | 7102 | 0.3459 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.6875 | 0.8462 | 0.7586 | 65 | 0.9108 | 0.9533 | 0.9316 | 150 | 0.7143 | 0.8621 | 0.7813 | 29 | 0.7317 | 0.8824 | 0.8 | 34 | 0.8051 | 0.8946 | 0.8475 | 0.9633 |
| 0.0004 | 68.0 | 7208 | 0.3155 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.68 | 0.7846 | 0.7286 | 65 | 0.9103 | 0.9467 | 0.9281 | 150 | 0.6765 | 0.7931 | 0.7302 | 29 | 0.8571 | 0.8824 | 0.8696 | 34 | 0.8143 | 0.8746 | 0.8434 | 0.9648 |
| 0.0004 | 69.0 | 7314 | 0.3485 | 0.7722 | 0.8356 | 0.8026 | 73 | 0.6582 | 0.8 | 0.7222 | 65 | 0.8987 | 0.9467 | 0.9221 | 150 | 0.6857 | 0.8276 | 0.75 | 29 | 0.8571 | 0.8824 | 0.8696 | 34 | 0.8005 | 0.8803 | 0.8385 | 0.9631 |
| 0.0003 | 70.0 | 7420 | 0.3382 | 0.7692 | 0.8219 | 0.7947 | 73 | 0.6625 | 0.8154 | 0.7310 | 65 | 0.9161 | 0.9467 | 0.9311 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.8571 | 0.8824 | 0.8696 | 34 | 0.8084 | 0.8775 | 0.8415 | 0.9638 |
| 0.001 | 71.0 | 7526 | 0.3148 | 0.8052 | 0.8493 | 0.8267 | 73 | 0.6842 | 0.8 | 0.7376 | 65 | 0.9221 | 0.9467 | 0.9342 | 150 | 0.6765 | 0.7931 | 0.7302 | 29 | 0.8333 | 0.8824 | 0.8571 | 34 | 0.8196 | 0.8803 | 0.8489 | 0.9658 |
| 0.0003 | 72.0 | 7632 | 0.3217 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.7246 | 0.7692 | 0.7463 | 65 | 0.9281 | 0.9467 | 0.9373 | 150 | 0.6857 | 0.8276 | 0.75 | 29 | 0.8333 | 0.8824 | 0.8571 | 34 | 0.8347 | 0.8775 | 0.8556 | 0.9660 |
| 0.0007 | 73.0 | 7738 | 0.3234 | 0.7848 | 0.8493 | 0.8158 | 73 | 0.6875 | 0.8462 | 0.7586 | 65 | 0.9156 | 0.94 | 0.9276 | 150 | 0.6765 | 0.7931 | 0.7302 | 29 | 0.8571 | 0.8824 | 0.8696 | 34 | 0.8141 | 0.8860 | 0.8486 | 0.9655 |
| 0.0003 | 74.0 | 7844 | 0.3176 | 0.7821 | 0.8356 | 0.8079 | 73 | 0.6707 | 0.8462 | 0.7483 | 65 | 0.8981 | 0.94 | 0.9186 | 150 | 0.6765 | 0.7931 | 0.7302 | 29 | 0.8571 | 0.8824 | 0.8696 | 34 | 0.8031 | 0.8832 | 0.8412 | 0.9646 |
| 0.0005 | 75.0 | 7950 | 0.3431 | 0.7654 | 0.8493 | 0.8052 | 73 | 0.6835 | 0.8308 | 0.75 | 65 | 0.9103 | 0.9467 | 0.9281 | 150 | 0.7059 | 0.8276 | 0.7619 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8041 | 0.8889 | 0.8444 | 0.9646 |
| 0.0005 | 76.0 | 8056 | 0.3416 | 0.8026 | 0.8356 | 0.8188 | 73 | 0.6875 | 0.8462 | 0.7586 | 65 | 0.9167 | 0.9533 | 0.9346 | 150 | 0.6765 | 0.7931 | 0.7302 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8125 | 0.8889 | 0.8490 | 0.9643 |
| 0.0002 | 77.0 | 8162 | 0.3295 | 0.75 | 0.8219 | 0.7843 | 73 | 0.6835 | 0.8308 | 0.75 | 65 | 0.9045 | 0.9467 | 0.9251 | 150 | 0.6765 | 0.7931 | 0.7302 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.7984 | 0.8803 | 0.8374 | 0.9646 |
| 0.0001 | 78.0 | 8268 | 0.3368 | 0.7848 | 0.8493 | 0.8158 | 73 | 0.6835 | 0.8308 | 0.75 | 65 | 0.9103 | 0.9467 | 0.9281 | 150 | 0.7188 | 0.7931 | 0.7541 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8120 | 0.8860 | 0.8474 | 0.9651 |
| 0.0004 | 79.0 | 8374 | 0.3212 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9161 | 0.9467 | 0.9311 | 150 | 0.7188 | 0.7931 | 0.7541 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8201 | 0.8832 | 0.8505 | 0.9663 |
| 0.0001 | 80.0 | 8480 | 0.3227 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9226 | 0.9533 | 0.9377 | 150 | 0.7188 | 0.7931 | 0.7541 | 29 | 0.8108 | 0.8824 | 0.8451 | 34 | 0.8228 | 0.8860 | 0.8532 | 0.9663 |
| 0.0003 | 81.0 | 8586 | 0.3216 | 0.8052 | 0.8493 | 0.8267 | 73 | 0.6923 | 0.8308 | 0.7552 | 65 | 0.9045 | 0.9467 | 0.9251 | 150 | 0.6667 | 0.7586 | 0.7097 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8120 | 0.8860 | 0.8474 | 0.9663 |
| 0.0004 | 82.0 | 8692 | 0.3134 | 0.7792 | 0.8219 | 0.8000 | 73 | 0.6923 | 0.8308 | 0.7552 | 65 | 0.9045 | 0.9467 | 0.9251 | 150 | 0.6471 | 0.7586 | 0.6984 | 29 | 0.8421 | 0.9412 | 0.8889 | 34 | 0.8073 | 0.8832 | 0.8435 | 0.9670 |
| 0.0003 | 83.0 | 8798 | 0.3101 | 0.8289 | 0.8630 | 0.8456 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9226 | 0.9533 | 0.9377 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.8421 | 0.9412 | 0.8889 | 34 | 0.8311 | 0.8974 | 0.8630 | 0.9700 |
| 0.0002 | 84.0 | 8904 | 0.3153 | 0.8158 | 0.8493 | 0.8322 | 73 | 0.7105 | 0.8308 | 0.7660 | 65 | 0.9221 | 0.9467 | 0.9342 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.8421 | 0.9412 | 0.8889 | 34 | 0.8302 | 0.8917 | 0.8599 | 0.9683 |
| 0.0005 | 85.0 | 9010 | 0.3358 | 0.8133 | 0.8356 | 0.8243 | 73 | 0.6914 | 0.8615 | 0.7671 | 65 | 0.9103 | 0.9467 | 0.9281 | 150 | 0.6765 | 0.7931 | 0.7302 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8172 | 0.8917 | 0.8529 | 0.9655 |
| 0.0001 | 86.0 | 9116 | 0.3357 | 0.8026 | 0.8356 | 0.8188 | 73 | 0.7089 | 0.8615 | 0.7778 | 65 | 0.9103 | 0.9467 | 0.9281 | 150 | 0.6765 | 0.7931 | 0.7302 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8194 | 0.8917 | 0.8540 | 0.9665 |
| 0.0002 | 87.0 | 9222 | 0.3371 | 0.7895 | 0.8219 | 0.8054 | 73 | 0.7051 | 0.8462 | 0.7692 | 65 | 0.9226 | 0.9533 | 0.9377 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8184 | 0.8860 | 0.8509 | 0.9655 |
| 0.0001 | 88.0 | 9328 | 0.3303 | 0.7895 | 0.8219 | 0.8054 | 73 | 0.7051 | 0.8462 | 0.7692 | 65 | 0.9216 | 0.94 | 0.9307 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.7895 | 0.8824 | 0.8333 | 34 | 0.8175 | 0.8803 | 0.8477 | 0.9648 |
| 0.0001 | 89.0 | 9434 | 0.3300 | 0.7895 | 0.8219 | 0.8054 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9156 | 0.94 | 0.9276 | 150 | 0.7273 | 0.8276 | 0.7742 | 29 | 0.7949 | 0.9118 | 0.8493 | 34 | 0.8179 | 0.8832 | 0.8493 | 0.9665 |
| 0.0001 | 90.0 | 9540 | 0.3355 | 0.7792 | 0.8219 | 0.8000 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9221 | 0.9467 | 0.9342 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.7949 | 0.9118 | 0.8493 | 34 | 0.8158 | 0.8832 | 0.8482 | 0.9660 |
| 0.0002 | 91.0 | 9646 | 0.3345 | 0.7792 | 0.8219 | 0.8000 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9156 | 0.94 | 0.9276 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8153 | 0.8803 | 0.8466 | 0.9658 |
| 0.0006 | 92.0 | 9752 | 0.3235 | 0.7895 | 0.8219 | 0.8054 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9216 | 0.94 | 0.9307 | 150 | 0.6765 | 0.7931 | 0.7302 | 29 | 0.8158 | 0.9118 | 0.8611 | 34 | 0.8175 | 0.8803 | 0.8477 | 0.9660 |
| 0.0002 | 93.0 | 9858 | 0.3225 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9216 | 0.94 | 0.9307 | 150 | 0.6471 | 0.7586 | 0.6984 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8175 | 0.8803 | 0.8477 | 0.9665 |
| 0.0001 | 94.0 | 9964 | 0.3228 | 0.8026 | 0.8356 | 0.8188 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9216 | 0.94 | 0.9307 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8245 | 0.8832 | 0.8528 | 0.9665 |
| 0.0001 | 95.0 | 10070 | 0.3265 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9216 | 0.94 | 0.9307 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8223 | 0.8832 | 0.8516 | 0.9663 |
| 0.0001 | 96.0 | 10176 | 0.3283 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9156 | 0.94 | 0.9276 | 150 | 0.6970 | 0.7931 | 0.7419 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8201 | 0.8832 | 0.8505 | 0.9668 |
| 0.0002 | 97.0 | 10282 | 0.3329 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9226 | 0.9533 | 0.9377 | 150 | 0.6667 | 0.7586 | 0.7097 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8206 | 0.8860 | 0.8521 | 0.9670 |
| 0.0001 | 98.0 | 10388 | 0.3322 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9226 | 0.9533 | 0.9377 | 150 | 0.6667 | 0.7586 | 0.7097 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8206 | 0.8860 | 0.8521 | 0.9670 |
| 0.0001 | 99.0 | 10494 | 0.3324 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9226 | 0.9533 | 0.9377 | 150 | 0.6667 | 0.7586 | 0.7097 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8206 | 0.8860 | 0.8521 | 0.9670 |
| 0.0003 | 100.0 | 10600 | 0.3325 | 0.7922 | 0.8356 | 0.8133 | 73 | 0.7013 | 0.8308 | 0.7606 | 65 | 0.9226 | 0.9533 | 0.9377 | 150 | 0.6667 | 0.7586 | 0.7097 | 29 | 0.8378 | 0.9118 | 0.8732 | 34 | 0.8206 | 0.8860 | 0.8521 | 0.9670 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
snabhi/RealLighting | snabhi | "2024-03-29T04:19:04Z" | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-03-29T03:23:17Z" | ---
license: creativeml-openrail-m
---
AI_REAL_Lightning Edition - 6 Steps
TRY ME! CRAZY REALISM IN ONLY 6 STEPS!
Any liability arising from the improper or illegal use of this model is the sole responsibility of the end user. This checkpoint is provided free of charge to help advance the development of AI models in general. By downloading, you agree to the terms of the license agreement and assume all liabilities arising from your use of the model. Since a checkpoint developer cannot predict, control, or monitor the end use of the model or the legal jurisdiction the end user resides in, the developer assumes no responsibility for the acts of the end user.
V2 - 6 Steps and better results!
This version pushes the model to even faster and better results. It is based on my current unpublished version of FULLY_REAL_XL (V9). It will perform at 4-6 steps!
All samples in the V2 checkpoint post were created with the following settings:
- DPM++ SDE Karras
- CFG 2
- 800 x 1280
- HiRes Fix, 1.5x upscale (also 6 steps)
- ADetailer for eyes only, where applicable
- No LoRAs
- 6 steps!!!
I will also upload a set of images without HiRes Fix or ADetailer for reference. They are still very good overall.
If you have the computing power to run normal models with 55 steps or more, I suggest you try my FULLY_REAL_XL model or one of my artistic checkpoints.
Enjoy. If you like it and use it, kindly share some of your better results. |
rcodina/exemple1-finetuned-emotions | rcodina | "2024-03-23T15:34:03Z" | 117 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-23T11:47:41Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: exemple1-finetuned-emotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exemple1-finetuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the dair-ai/emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
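For reference, the values above can be collected as keyword arguments roughly matching the Hugging Face `Trainer` API (the argument names below are assumed from `transformers.TrainingArguments`, not stated in this card; the Adam betas and epsilon listed are the Trainer defaults):

```python
# Hyperparameters from the list above, expressed as assumed
# transformers.TrainingArguments keyword arguments.
training_kwargs = dict(
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)

print(training_kwargs["learning_rate"])  # → 2e-05
```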
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.5272 | 0.8415 | 0.8255 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
michael0218/distilbert_fine | michael0218 | "2024-03-23T02:43:26Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-23T01:46:47Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert_fine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_fine
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1169
- Matthews Correlation: 0.5396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1402 | 1.0 | 535 | 0.7980 | 0.5171 |
| 0.1318 | 2.0 | 1070 | 1.0522 | 0.5167 |
| 0.0983 | 3.0 | 1605 | 1.0457 | 0.5286 |
| 0.0723 | 4.0 | 2140 | 1.1169 | 0.5396 |
| 0.0429 | 5.0 | 2675 | 1.2072 | 0.5350 |
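The Matthews correlation peaks at epoch 4 and dips slightly at epoch 5, which is consistent with the epoch-4 checkpoint being the one reported above. A quick check over the tabulated scores (numbers copied from the table above):

```python
# (epoch, validation_loss, matthews_correlation) rows from the table above.
results = [
    (1, 0.7980, 0.5171),
    (2, 1.0522, 0.5167),
    (3, 1.0457, 0.5286),
    (4, 1.1169, 0.5396),
    (5, 1.2072, 0.5350),
]

# Pick the checkpoint with the highest Matthews correlation.
best_epoch, best_loss, best_mcc = max(results, key=lambda row: row[2])
print(best_epoch, best_mcc)  # → 4 0.5396
```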
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
chinhnt19/qwen2B_1.3K_villa13B_llama8B | chinhnt19 | "2025-03-19T15:22:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_vl",
"trl",
"en",
"base_model:unsloth/Qwen2-VL-2B-Instruct",
"base_model:finetune:unsloth/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-19T15:20:48Z" | ---
base_model: unsloth/Qwen2-VL-2B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** chinhnt19
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-VL-2B-Instruct
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dss107/news3 | dss107 | "2023-09-28T06:26:18Z" | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-09-28T06:25:03Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dss107/news3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dss107/news3")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
gaudi/opus-mt-zle-en-ctranslate2 | gaudi | "2024-10-18T22:58:42Z" | 9 | 0 | transformers | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | "2024-07-17T00:18:21Z" | ---
tags:
- ctranslate2
- translation
license: apache-2.0
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-zle-en)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-zle-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
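For reference, the relative BLEU change against a vanilla checkpoint can be computed as follows (illustrative helper, not part of the card's tooling):

```python
def relative_change_pct(converted: float, vanilla: float) -> float:
    # Percentage change of the converted model's score vs. the vanilla score;
    # e.g. BLEU 27.53 vs 27.90 is roughly a -1.3% change.
    return (converted - vanilla) / vanilla * 100.0
```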
# CTranslate2 Installation
```bash
pip install "hf-hub-ctranslate2>=1.0.0" "ctranslate2>=3.13.0"
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-zle-en --output_dir ./ctranslate2/opus-mt-zle-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
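The recommended mapping above can be expressed as a small helper (the `pick_compute_type` name is hypothetical):

```python
def pick_compute_type(device: str) -> str:
    # Map the target device to the compute type recommended for this checkpoint.
    if device == "cuda":
        return "int8_float16"  # int8 weights with float16 computation on GPU
    if device == "cpu":
        return "int8"          # plain int8 quantization on CPU
    raise ValueError(f"unsupported device: {device!r}")
```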
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-zle-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-zle-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-zle-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-zle-en) by Helsinki-NLP.
|
gzlixiaochao/Llama-3.1-8B-bnb-4bit-wenyanwen | gzlixiaochao | "2024-09-09T07:54:32Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-09-09T07:12:29Z" | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** gzlixiaochao
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
houdini001/BERT2BERT_try_epoch15 | houdini001 | "2024-02-17T07:01:34Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-02-17T05:35:26Z" | ---
tags:
- generated_from_trainer
model-index:
- name: BERT2BERT_try_epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT2BERT_try_epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0176
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0145 | 1.57 | 1000 | 0.0147 |
| 0.0058 | 3.14 | 2000 | 0.0157 |
| 0.002 | 4.72 | 3000 | 0.0167 |
| 0.0012 | 6.29 | 4000 | 0.0154 |
| 0.0005 | 7.86 | 5000 | 0.0155 |
| 0.0004 | 9.43 | 6000 | 0.0159 |
| 0.0005 | 11.01 | 7000 | 0.0167 |
| 0.0004 | 12.58 | 8000 | 0.0172 |
| 0.0003 | 14.15 | 9000 | 0.0176 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
FounderOfHuggingface/gpt2_gen_lora_r16_wikitext2_t300_e20_non_member_shadow18 | FounderOfHuggingface | "2024-01-16T11:36:32Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2024-01-16T11:36:32Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
nblinh63/56151d9e-4d31-478c-ae4c-50c1fab59312 | nblinh63 | "2025-01-28T12:50:36Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-135M",
"base_model:adapter:unsloth/SmolLM2-135M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-28T12:39:38Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 56151d9e-4d31-478c-ae4c-50c1fab59312
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ee9d58a82aadb294_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ee9d58a82aadb294_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh63/56151d9e-4d31-478c-ae4c-50c1fab59312
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ee9d58a82aadb294_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 86210f0a-1e8f-416c-85dd-99b5aeedced8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 86210f0a-1e8f-416c-85dd-99b5aeedced8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 56151d9e-4d31-478c-ae4c-50c1fab59312
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5693 | 0.2673 | 200 | 1.4469 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bartowski/Everyone-Coder-4x7b-Base-exl2 | bartowski | "2024-01-14T22:27:25Z" | 1 | 1 | null | [
"merge",
"moe",
"text-generation",
"license:cc-by-4.0",
"region:us"
] | text-generation | "2024-01-14T21:47:30Z" | ---
license: cc-by-4.0
tags:
- merge
- moe
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Everyone-Coder-4x7b-Base
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except when the bits per weight is above 6.0, in which case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/rombodawg/Everyone-Coder-4x7b-Base
<a href="https://huggingface.co/bartowski/Everyone-Coder-4x7b-Base-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/Everyone-Coder-4x7b-Base-exl2/tree/4_25">4.25 bits per weight</a>
<a href="https://huggingface.co/bartowski/Everyone-Coder-4x7b-Base-exl2/tree/3_5">3.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/Everyone-Coder-4x7b-Base-exl2/tree/3_0">3.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Everyone-Coder-4x7b-Base-exl2/tree/2_4">2.4 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/Everyone-Coder-4x7b-Base-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (useful only if you care about the measurement.json) to a folder called `Everyone-Coder-4x7b-Base-exl2`:
```shell
mkdir Everyone-Coder-4x7b-Base-exl2
huggingface-cli download bartowski/Everyone-Coder-4x7b-Base-exl2 --local-dir Everyone-Coder-4x7b-Base-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Everyone-Coder-4x7b-Base-exl2
huggingface-cli download bartowski/Everyone-Coder-4x7b-Base-exl2 --revision 4_0 --local-dir Everyone-Coder-4x7b-Base-exl2 --local-dir-use-symlinks False
```
|
facebook/fasttext-an-vectors | facebook | "2023-06-03T22:09:11Z" | 8 | 0 | fasttext | [
"fasttext",
"feature-extraction",
"an",
"arxiv:1607.04606",
"arxiv:1802.06893",
"arxiv:1607.01759",
"arxiv:1612.03651",
"license:cc-by-sa-3.0",
"region:us"
] | feature-extraction | "2023-03-17T17:01:52Z" |
---
license: cc-by-sa-3.0
tags:
- feature-extraction
library_name: fasttext
language: an
widget:
- text: apple
example_title: apple
---
# fastText (Aragonese)
fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/).
## Model description
fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes.
It includes pre-trained models learned on Wikipedia in over 157 different languages. fastText can be used as a command line tool, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production.
## Intended uses & limitations
You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you.
### How to use
Here is how to load and use the pre-trained vectors:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-an-vectors", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.words
['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...]
>>> len(model.words)
145940
>>> model['bread']
array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01,
-1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...])
```
Here is how to use this model to query nearest neighbors of an English word vector:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.get_nearest_neighbors("bread", k=5)
[(0.5641006231307983, 'butter'),
(0.48875734210014343, 'loaf'),
(0.4491206705570221, 'eat'),
(0.42444291710853577, 'food'),
(0.4229326844215393, 'cheese')]
```
Here is how to use this model to detect the language of a given text:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.predict("Hello, world!")
(('__label__eng_Latn',), array([0.81148803]))
>>> model.predict("Hello, world!", k=5)
(('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'),
array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415]))
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions.
Cosine similarity can be used to measure the similarity between two different word vectors. If two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1.
```python
>>> import numpy as np
>>> def cosine_similarity(word1, word2):
...     return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2]))
>>> cosine_similarity("man", "boy")
0.061653383
>>> cosine_similarity("man", "ceo")
0.11989131
>>> cosine_similarity("woman", "ceo")
-0.08834904
```
## Training data
Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish.
## Training procedure
### Tokenization
We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer.
More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893).
### License
The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/).
### Evaluation datasets
The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt).
### BibTeX entry and citation info
Please cite [1] if using this code for learning word representations or [2] if using for text classification.
[1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606)
```markup
@article{bojanowski2016enriching,
title={Enriching Word Vectors with Subword Information},
author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.04606},
year={2016}
}
```
[2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759)
```markup
@article{joulin2016bag,
title={Bag of Tricks for Efficient Text Classification},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.01759},
year={2016}
}
```
[3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651)
```markup
@article{joulin2016fasttext,
title={FastText.zip: Compressing text classification models},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, Herv{\'e} and Mikolov, Tomas},
journal={arXiv preprint arXiv:1612.03651},
year={2016}
}
```
If you use these word vectors, please cite the following paper:
[4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893)
```markup
@inproceedings{grave2018learning,
title={Learning Word Vectors for 157 Languages},
author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas},
booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
(\* These authors contributed equally.)
|
RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf | RichardErkhov | "2024-06-22T23:23:47Z" | 7 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T19:04:58Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-1.1B-intermediate-step-715k-1.5T - GGUF
- Model creator: https://huggingface.co/TinyLlama/
- Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q2_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q3_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q4_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q4_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q4_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q5_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q5_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q5_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q6_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyLlama-1.1B-intermediate-step-715k-1.5T.Q8_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-715k-1.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-715k-1.5T.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
<div align="center">
<img src="https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b/resolve/main/TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is an intermediate checkpoint with 715K steps and 1.49T tokens. **We suggest you not use this directly for inference.**
#### How to use
You will need `transformers>=4.31`.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    repetition_penalty=1.5,
    eos_token_id=tokenizer.eos_token_id,
    max_length=500,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.49T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
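The avg column is the plain arithmetic mean of the seven task scores; verifying for the 715k checkpoint row:

```python
# Benchmark scores for TinyLlama-1.1B-intermediate-step-715k-1.5T, from the table above.
scores = {"HellaSwag": 53.68, "Obqa": 35.20, "WinoGrande": 58.33,
          "ARC_c": 29.18, "ARC_e": 51.89, "boolq": 59.08, "piqa": 71.65}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 51.29, matching the avg column
```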
|
Jukaboo/Llama2_7B_chat_dialogsum_ft_adapters_v2400 | Jukaboo | "2023-09-11T12:14:06Z" | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | "2023-09-11T11:56:50Z" | ---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: Llama2_7B_chat_dialogsum_ft_adapters_v2400
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2_7B_chat_dialogsum_ft_adapters_v2400
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
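The total_train_batch_size reported above is simply the micro-batch size multiplied by the gradient accumulation steps:

```python
# Effective batch size = per-device batch size x gradient accumulation steps.
train_batch_size = 1
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 4, matching the value reported above
```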
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Helsinki-NLP/opus-mt-fi-zne | Helsinki-NLP | "2023-08-16T11:35:56Z" | 181 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fi",
"zne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-zne
* source languages: fi
* target languages: zne
* OPUS readme: [fi-zne](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-zne/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-zne/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-zne/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-zne/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.zne | 22.7 | 0.464 |
|
manadopeee/segformer-b0-scene-parse-150_epoch_100_230609 | manadopeee | "2023-06-08T23:55:29Z" | 31 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2023-06-08T23:49:36Z" | ---
license: other
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150_epoch_100_230609
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150_epoch_100_230609
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7126
- Mean Iou: 0.1053
- Mean Accuracy: 0.1994
- Overall Accuracy: 0.5447
- Per Category Iou: [0.48741983413436024, 0.34708122936068353, 0.8494644532893246, 0.3618389507826823, 0.016919144195669256, 0.746579767268802, 0.0, 0.4008814740204453, 0.26432782122527576, 0.0, 0.0, 0.2358305940560507, 0.13905866374131537, nan, 0.0, 0.0, 0.5318380393695908, 0.0, 0.0, 0.041298586572438165, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
- Per Category Accuracy: [0.7865618692274757, 0.9652097859624402, 0.9908729919072352, 0.5594874236350619, 0.12989690721649486, 0.8943671630094044, nan, 0.8825049920983964, 0.29573472254593786, nan, 0.0, 0.9468519337392428, 0.16706413957574998, nan, 0.0, 0.0, 0.5378679869020947, 0.0, 0.0, 0.21969845310358332, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
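Per-category IoU, the metric in these lists, is intersection-over-union computed class by class, with nan when a class never occurs. A minimal sketch on toy 3×3 label maps (the arrays are illustrative, not from this run):

```python
# Toy ground-truth and predicted segmentation maps (illustrative values only).
pred = [[0, 0, 1], [1, 1, 2], [2, 2, 2]]
true = [[0, 1, 1], [1, 1, 2], [2, 0, 2]]

def class_iou(cls):
    p = {(i, j) for i, row in enumerate(pred) for j, v in enumerate(row) if v == cls}
    t = {(i, j) for i, row in enumerate(true) for j, v in enumerate(row) if v == cls}
    union = p | t
    # nan when the class appears in neither map, matching the nan entries above
    return len(p & t) / len(union) if union else float("nan")

for cls in (0, 1, 2):
    print(cls, round(class_iou(cls), 3))
```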
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 3.335 | 20.0 | 100 | 3.5913 | 0.0958 | 0.1968 | 0.4914 | [0.4372210968359756, 0.3028306951772656, 0.9033017061947888, 0.3690449269582307, 0.05890453885736904, 0.521817339647163, 0.0, 0.3631349261471501, 0.05912798485639358, nan, 0.0, 0.23295937758137303, 0.12080500701413618, 0.0, 0.0, 0.0, 0.40666846895557357, 0.0, 0.0, 0.15182824063896827, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7828715313882603, 0.9935011297101626, 0.9796020050730765, 0.6524082439607645, 0.5454753722794959, 0.5683581504702194, nan, 0.7686116453711304, 0.05922631608786308, nan, 0.0, 0.9736725738970713, 0.14051713317434417, nan, 0.0, 0.0, 0.41243778756116795, 0.0, 0.0, 0.40571764245153713, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.2088 | 40.0 | 200 | 2.9755 | 0.1102 | 0.2011 | 0.5560 | [0.45422192073986317, 0.3436668041953486, 0.8903445028964444, 0.36640300640210627, 0.08482177830003917, 0.696578291411738, 0.0, 0.3924824887368871, 0.1146148769912978, nan, 0.0, 0.2583488263193765, 0.09984717269485481, nan, 0.0, 0.0, 0.6613259967945657, 0.0, 0.0, 0.044113233970191054, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7754602727995596, 0.9612375792619227, 0.9850827394612875, 0.5917451364483808, 0.3307369224894998, 0.9034090909090909, nan, 0.8772617574636465, 0.11706677921472532, nan, 0.0, 0.9477023027994149, 0.11070666499309652, nan, 0.0, 0.0, 0.6691055509423911, 0.0, 0.0, 0.17270413158410025, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.8764 | 60.0 | 300 | 2.8496 | 0.1046 | 0.1910 | 0.5299 | [0.44823292205691717, 0.3374611910810048, 0.8521673994463442, 0.36771300448430494, 0.011525925925925926, 0.6769752103220841, nan, 0.4127585356400409, 0.19237793012603657, nan, 0.0, 0.23536215301960003, 0.10166928075285682, nan, 0.0, 0.0, 0.502039728794969, 0.0, 0.0, 0.044836210577685595, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.00027059937762143147, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7515036597549289, 0.9295206627632954, 0.9876570237951443, 0.5829066103598283, 0.08911798396334479, 0.9305789576802508, nan, 0.8610542821791275, 0.20239752562253144, nan, 0.0, 0.9655940678254362, 0.1139073678925568, nan, 0.0, 0.0, 0.5067709067740402, 0.0, 0.0, 0.14392010965341687, nan, nan, nan, 0.0, nan, nan, nan, 0.0002748763056624519, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.6882 | 80.0 | 400 | 2.6676 | 0.1123 | 0.2036 | 0.5699 | [0.4826916675912571, 0.35289291208668705, 0.8613952449463594, 0.3690071358526864, 0.04114119410882794, 0.7420633159137224, 0.0, 0.39243581224605395, 0.26480929728158487, nan, 0.0, 0.242911210420564, 0.12443874278383579, nan, 0.0, 0.0, 0.6824408307674852, 0.0, 0.0, 0.04806344199088679, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7805655799539217, 0.9613226112096402, 0.9898462978620607, 0.573010513921042, 0.1424971363115693, 0.9007004310344827, nan, 0.8826764997733648, 0.29841193849332587, nan, 0.0, 0.9540290486070955, 0.14610267352830425, nan, 0.0, 0.0, 0.6910372308479692, 0.0, 0.0, 0.21480321127863716, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 1.9454 | 100.0 | 500 | 2.7126 | 0.1053 | 0.1994 | 0.5447 | [0.48741983413436024, 0.34708122936068353, 0.8494644532893246, 0.3618389507826823, 0.016919144195669256, 0.746579767268802, 0.0, 0.4008814740204453, 0.26432782122527576, 0.0, 0.0, 0.2358305940560507, 0.13905866374131537, nan, 0.0, 0.0, 0.5318380393695908, 0.0, 0.0, 0.041298586572438165, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.7865618692274757, 0.9652097859624402, 0.9908729919072352, 0.5594874236350619, 0.12989690721649486, 0.8943671630094044, nan, 0.8825049920983964, 0.29573472254593786, nan, 0.0, 0.9468519337392428, 0.16706413957574998, nan, 0.0, 0.0, 0.5378679869020947, 0.0, 0.0, 0.21969845310358332, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rithwik-db/gpl-e5-base-unsupervised-curated | rithwik-db | "2023-04-21T19:12:26Z" | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-04-21T19:12:21Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# rithwik-db/gpl-e5-base-unsupervised-curated
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('rithwik-db/gpl-e5-base-unsupervised-curated')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rithwik-db/gpl-e5-base-unsupervised-curated')
model = AutoModel.from_pretrained('rithwik-db/gpl-e5-base-unsupervised-curated')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/gpl-e5-base-unsupervised-curated)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3952 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1185,
"weight_decay": 0.01
}
```
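The warmup_steps value above is consistent with a 10% warmup ratio over the full run (3 epochs × 3952 DataLoader batches). The ratio itself is an inference; the card does not state it:

```python
# Warmup as 10% of total optimizer steps (ratio inferred, not stated in the card).
epochs = 3
steps_per_epoch = 3952  # DataLoader length reported above
total_steps = epochs * steps_per_epoch
print(int(total_steps * 0.1))  # 1185, matching warmup_steps
```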
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
IneG/glue_sst_classifier | IneG | "2022-04-26T11:44:29Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-04-26T11:15:24Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sail-rvc/randy | sail-rvc | "2023-07-14T07:42:50Z" | 2 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:42:37Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# randy
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:42:49
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
otski/ppo-pyramids | otski | "2025-03-23T19:02:07Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2025-03-23T19:02:05Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: otski/ppo-pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
DrAliGomaa/whisper-large-v3-test-moreaugmenting | DrAliGomaa | "2025-04-02T13:51:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-02T13:51:33Z" | |
vertings6/49e33840-10fa-4b94-8b81-24059ae145d9 | vertings6 | "2025-01-11T00:05:57Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-11T00:05:00Z" | ---
library_name: peft
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 49e33840-10fa-4b94-8b81-24059ae145d9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 07f66aa9c46a42db_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/07f66aa9c46a42db_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: vertings6/49e33840-10fa-4b94-8b81-24059ae145d9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/07f66aa9c46a42db_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 39fbcf41-0f63-4b73-832a-f1900d583c56
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 39fbcf41-0f63-4b73-832a-f1900d583c56
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
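As a rough illustration of what the adapter config above does: with `lora_r: 16` and `lora_alpha: 32`, each targeted linear layer learns a low-rank update `(alpha / r) * B @ A` on top of the frozen base weight. The sketch below uses invented shapes (it is not the training code) just to show the update rule and the parameter savings.

```python
import numpy as np

# Toy LoRA update matching the config above: r=16, alpha=32.
# Shapes are made up for illustration; the real adapter targets the
# linear layers of Sheared-LLaMA-1.3B.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 48, 16, 32

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection;
                                           # zero init makes the adapter
                                           # a no-op before training

W_adapted = W + (alpha / r) * B @ A

# Trainable parameters per adapted layer vs. full fine-tuning:
lora_params = A.size + B.size   # 16*48 + 64*16 = 1792
full_params = W.size            # 64*48 = 3072
print(lora_params, full_params)
```

Because `B` starts at zero, `W_adapted` equals `W` at step 0, which is why a LoRA run begins from the base model's behavior.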
# 49e33840-10fa-4b94-8b81-24059ae145d9
This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0033 | 1 | nan |
| 0.0 | 0.0262 | 8 | nan |
| 0.0 | 0.0523 | 16 | nan |
| 0.0 | 0.0785 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
S1-sa/Sal | S1-sa | "2023-09-29T08:31:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-09-29T08:29:08Z" | # ⚠️ Type of model/library unknown.
# Feel free to open a Pull request
# for integration of the huggingface model hub
# into the corresponding library =) |
iamehreen/my-pet-cat-qaz | iamehreen | "2024-02-27T11:09:59Z" | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-02-27T11:05:59Z" | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-QAZ Dreambooth model trained by iamehreen following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 2101530109016
Sample pictures of this concept:

|
SparseLLM/reglu-25B | SparseLLM | "2024-02-07T02:29:33Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-14T05:34:42Z" | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, for more comprehensive experiments.
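A minimal numpy sketch of the activation functions compared here, and of what "sparse activation" means (the fraction of exact zeros in the hidden states). This is an illustration on random inputs, not the measurement protocol used for the pretrained models.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def squared_relu(x):
    return relu(x) ** 2

def reglu(x, gate):
    # ReGLU: gated linear unit with a ReLU gate
    return x * relu(gate)

def swiglu(x, gate):
    # SwiGLU: gated linear unit with a SiLU (swish) gate
    return x * gate / (1.0 + np.exp(-gate))

def sparsity(a, tol=0.0):
    # fraction of activations with magnitude <= tol
    return float(np.mean(np.abs(a) <= tol))

rng = np.random.default_rng(0)
h = rng.standard_normal((1024, 512))
g = rng.standard_normal((1024, 512))

# ReLU-gated activations zero out roughly half of the (zero-mean)
# inputs, giving exact zeros; SwiGLU is dense unless a small
# magnitude threshold is applied.
print(sparsity(relu(h)))          # ~0.5
print(sparsity(squared_relu(h)))  # ~0.5
print(sparsity(reglu(h, g)))      # ~0.5
print(sparsity(swiglu(h, g)))     # ~0.0 exact zeros
```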
### Dataset
We pretrain the models on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
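Back-of-the-envelope: if the 4M batch size above is measured in tokens (an assumption on our part), the 100 billion training tokens correspond to roughly 25k optimizer steps:

```python
# Implied step count for the run described above, assuming the 4M
# batch size is measured in tokens per optimizer step.
tokens_total = 100_000_000_000
tokens_per_step = 4_000_000
steps = tokens_total // tokens_per_step
print(steps)  # 25000
```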
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
neopolita/qwen2-math-1.5b-instruct-gguf | neopolita | "2024-08-08T20:37:49Z" | 18 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-08-08T20:31:25Z" | ---
{}
---
# GGUF quants for [**Qwen/Qwen2-Math-1.5B-Instruct**](https://huggingface.co/Qwen/Qwen2-Math-1.5B-Instruct) using [llama.cpp](https://github.com/ggerganov/llama.cpp)
**Terms of Use**: Please check the [**original model**](https://huggingface.co/Qwen/Qwen2-Math-1.5B-Instruct)
<picture>
<img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>
## Quants
* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
ygmrdgan/bert-finetuned-ner_lr0.001_bs16 | ygmrdgan | "2023-11-07T15:56:44Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-11-07T15:00:27Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: ygmrdgan/bert-finetuned-ner_lr0.001_bs16
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ygmrdgan/bert-finetuned-ner_lr0.001_bs16
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3166
- Validation Loss: 0.5051
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.001, 'decay_steps': 321, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
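The schedule above is Keras `PolynomialDecay` with `power=1.0` and `cycle=False`, i.e. a linear ramp from 1e-3 down to 0 over 321 steps. A small sketch of that rule (our re-implementation for illustration, not the Keras source):

```python
def polynomial_decay(step, initial_lr=0.001, decay_steps=321,
                     end_lr=0.0, power=1.0):
    # Decays from initial_lr to end_lr over decay_steps; with
    # cycle=False the rate stays at end_lr afterwards. power=1.0
    # makes the decay linear.
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))    # 0.001
print(polynomial_decay(160))
print(polynomial_decay(321))  # 0.0
```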
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4170 | 0.4831 | 0 |
| 0.3191 | 0.5046 | 1 |
| 0.3166 | 0.5051 | 2 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
owanr/google-t5-v1_1-small-intra_model | owanr | "2023-10-30T20:46:28Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"generated_from_trainer",
"base_model:google/t5-v1_1-small",
"base_model:finetune:google/t5-v1_1-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2023-10-25T02:03:52Z" | ---
license: apache-2.0
base_model: google/t5-v1_1-small
tags:
- generated_from_trainer
model-index:
- name: google-t5-v1_1-small-intra_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-t5-v1_1-small-intra_model
This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6973
- Losses: [0.4, 0.8, 0.8, 1, 0.0, 0.6000000000000001, 0.8, 0.8, 0.6000000000000001, 1.0, 1, 1, 1.0, 1, 1.0, 0.6000000000000001, 0.4, 0.2, 0.6000000000000001, 0.8, 0.8, 0.0, 0.8, 0.8, 0.6000000000000001, 1, 0.8, 0.8, 1, 0.8, 0.4, 0.8, 0.8, 0.4, 1, 1, 0.4, 0.8, 0.2, 1, 1, 0.4, 1, 1, 0.8, 1, 1, 1, 1, 0.6000000000000001, 1, 0.8, 0.0, 0.8, 0.0, 0.8, 1, 1, 0.4, 0.4, 0.2, 0.4, 0.8, 0.8, 0.4, 1, 0.2, 0.4, 0.8, 1, 1, 0.6000000000000001, 0.8, 0.8, 0.6000000000000001, 0.8, 1, 0.8, 1, 0.0, 1, 0.0, 0.8, 0.8, 0.8, 1, 0.8, 0.8, 0.4, 1, 0.8, 0.8, 0.8, 0.8, 0.0, 1, 0.8, 0.6000000000000001, 0.0, 1, 0.8, 1, 1, 1, 1, 0.0, 0.8, 1, 1, 0.8, 1, 1, 1, 0.4, 0.4, 1, 1, 0.8, 0.8, 0.6000000000000001, 0.0, 0.6000000000000001, 0.2, 1.0, 0.8, 0.8, 0.8, 1, 0.8, 0.8, 0.6000000000000001, 1, 0.8, 0.8, 1, 1, 0.8, 0.6000000000000001, 0.4, 0.8, 0.0, 0.2, 0.8, 0.8, 0.6000000000000001, 0.8, 1, 0.8, 0.4, 1, 1, 1.0, 0.8, 0.8, 1, 1, 1, 0.8, 1.0, 0.4, 0.8, 0.4, 1, 0.4, 0.0, 0.8, 0.8, 0.0, 1, 0.8, 1, 0.6000000000000001, 1, 1.0, 0.8, 1.0, 0.4, 0.4, 0.8, 0.8, 0.6000000000000001, 1, 0.4, 1, 1, 0.2, 0.0, 0.6000000000000001, 0.4, 0.2, 0.2, 0.8, 0.8, 0.8, 1, 0.8, 1, 1, 0.8, 0.8, 0.6000000000000001, 0.4, 1, 0.4, 0.0, 1, 0.8, 0.2, 0.6000000000000001, 0.6000000000000001, 0.2, 0.4, 0.8, 0.6000000000000001, 1.0, 0.8, 1, 0.8, 0.8, 0.8, 0.8, 0.4, 0.4, 1, 0.8, 0.2, 0.2, 1, 0.8, 0.8, 0.8, 1, 1, 0.0, 0.4, 0.6000000000000001, 1, 1, 0.8, 0.8, 0.8, 0.8, 1, 0.8, 0.8, 0.4, 1, 0.4, 1, 1, 0.8, 1, 1, 0.8, 0.8, 0.0, 0.4, 1, 1, 1.0, 1, 0.8, 0.4, 1, 0.6000000000000001, 1, 0.0, 1, 1, 0.8, 0.8, 0.6000000000000001, 1, 1, 0.2, 0.8, 0.6000000000000001, 0.8, 1, 0.6000000000000001, 0.8, 0.4, 1, 0.2, 0.8, 0.6000000000000001, 0.8, 1, 0.6000000000000001, 0.4, 1, 0.4, 0.0, 1, 1, 0.8, 1, 1, 0.8, 1, 0.2, 0.4, 0.8, 0.6000000000000001, 0.8, 0.4, 0.4, 0.8, 1, 0.0, 0.6000000000000001, 0.6000000000000001, 1, 1, 0.0, 0.8, 1, 0.8, 0.8, 0.8, 0.4, 0.4, 0.8, 1, 1, 1, 0.6000000000000001, 0.0, 0.8, 0.8, 0.8, 0.4, 1, 1, 0.4, 0.4, 0.8, 0.8, 1, 0.4, 0.2, 
0.6000000000000001, 1, 1, 1, 0.8, 1, 1, 0.8, 0.4, 0.4, 0.8, 1, 0.8, 1, 0.4, 0.6000000000000001, 0.4, 1]
- Train Loss: 0.7164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Losses | Train Loss |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------:|
| 13.7913 | 1.0 | 99 | 11.9227 | [1.0, 1.0, 1, 1, 1.0, 1.0, 1, 0.8888888888888888, 0.875, 0.8461538461538461, 0.875, 0.8888888888888888, 1.0, 0.8, 1.0, 0.8888888888888888, 1.0, 1.0, 1, 1.0, 1, 1.0, 1.0, 1.0, 1.0, 0.85, 1.0, 1.0, 0.8235294117647058, 0.5555555555555556, 0.8888888888888888, 1, 1.0, 10.0, 1, 0.8888888888888888, 1.0, 1.0, 1.0, 0.8888888888888888, 1.0, 1, 0.8571428571428571, 0.6666666666666666, 0.8888888888888888, 0.8888888888888888, 0.8888888888888888, 1.0, 0.8, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.8888888888888888, 1, 1.0, 0.8888888888888888, 1.0, 1, 1, 0.5555555555555556, 1.0, 1.0, 1.0, 1.0, 1, 0.8235294117647058, 1.0, 0.3333333333333333, 1.0, 0.8888888888888888, 0.8571428571428571, 1, 1, 1, 1.0, 0.8, 1.0, 1.0, 1, 1.0, 1, 0.9090909090909091, 0.875, 1.0, 1, 1.0, 0.8461538461538461, 1.0, 1.0, 1, 0.8571428571428571, 1.0, 1, 0.8888888888888888, 0.8888888888888888, 1.0, 1.0, 1.0, 0.7777777777777778, 1, 0.8666666666666667, 1.0, 1, 0.8888888888888888, 1, 0.8888888888888888, 1.0, 1.0, 1, 1.0, 1.0, 1.0, 1.0, 1.0, 0.8888888888888888, 0.8, 0.8888888888888888, 1.0, 1.0, 1.0, 1.0, 0.875, 1.0, 1.0, 1.0, 0.8888888888888888, 0.8888888888888888, 1, 0.8, 0.8, 1.0, 1, 1, 1.0, 1.0, 1.0, 1.0, 0.8888888888888888, 1, 1.0, 1.0, 1.0, 0.8888888888888888, 0.8461538461538461, 1.0, 1.0, 0.9090909090909091, 1.0, 0.8181818181818182, 0.8, 0.8888888888888888, 0.8, 1, 1.0, 1, 0.9090909090909091, 1.0, 1.0, 1.0, 0.75, 1, 1.0, 1.0, 0.8888888888888888, 0.8235294117647058, 1.0, 1.0, 1.0, 0.8235294117647058, 1.0, 1.0, 1.0, 0.8888888888888888, 0.8235294117647058, 1.0, 1.0, 10.0, 1.0, 0.8888888888888888, 1.0, 1.0, 1, 1, 1.0, 1.0, 1, 1, 1.0, 0.8888888888888888, 1.0, 1.0, 0.8888888888888888, 0.8888888888888888, 1.0, 1.0, 0.8, 0.8888888888888888, 1.0, 1.0, 0.8888888888888888, 1.0, 0.875, 0.8888888888888888, 1.0, 0.5555555555555556, 0.8888888888888888, 1.0, 1, 0.875, 0.8888888888888888, 1.0, 10.0, 1, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1, 1.0, 1.0, 1.0, 1.0, 1.0, 1, 0.6666666666666666, 1.0, 
1.0, 1.0, 0.8571428571428571, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.8888888888888888, 0.8888888888888888, 1.0, 0.7777777777777778, 1, 1.0, 1.0, 0.8461538461538461, 1.0, 0.8, 0.8888888888888888, 1.0, 1, 1.0, 1, 0.8, 1.0, 0.8, 0.8888888888888888, 1, 1.0, 1.0, 1.0, 1.0, 0.8181818181818182, 0.875, 0.7777777777777778, 0.8888888888888888, 10.0, 0.8888888888888888, 0.875, 1.0, 0.8888888888888888, 0.8888888888888888, 0.8, 1.0, 1.0, 1.0, 0.8888888888888888, 1.0, 1, 0.8125, 1.0, 0.9090909090909091, 1.0, 1.0, 0.8888888888888888, 1.0, 1.0, 0.8888888888888888, 0.8888888888888888, 0.75, 1, 1, 0.9090909090909091, 1.0, 0.75, 1, 0.875, 1.0, 1.0, 0.9, 1, 1.0, 0.4444444444444444, 1.0, 1, 1.0, 1, 1, 0.8888888888888888, 1.0, 1, 1.0, 0.8888888888888888, 1.0, 1.0, 1.0, 1.0, 0.8888888888888888, 1.0, 0.875, 1.0, 1.0, 1.0, 1.0, 1.0, 0.8, 1.0, 0.8888888888888888, 1.0, 1.0, 0.8888888888888888, 0.875, 0.4444444444444444, 1.0, 1.0, 1.0, 0.8888888888888888, 10.0, 0.7777777777777778, 1.0, 1.0, 1.0, 0.8461538461538461, 1, 0.8888888888888888, 0.8888888888888888, 0.8125, 0.6666666666666666, 1.0, 1.0, 0.8888888888888888, 1, 0.8461538461538461, 1.0] | 1.0697 |
| 6.0033 | 2.0 | 198 | 4.6189 | [1, 0.8, 1, 1, 1, 1.0, 1, 0.8, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.8, 0.8, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 0.8, 0.8, 1, 1, 1, 1, 1.0, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.8, 0.8, 1, 1, 1, 1, 0.8, 0.8, 1, 1, 1, 1, 1, 1, 1, 0.8, 1.0, 1, 1, 1, 1.0, 1, 1, 0.8, 1, 1, 0.8, 1, 0.8, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1.0, 1, 0.8, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 0.8, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 0.8, 1, 1.0, 1, 0.8, 0.8, 1, 0.8, 1, 1, 1, 0.8, 1, 1, 1, 0.8, 1, 1, 0.8, 1, 0.8, 1, 1, 1, 1, 1, 0.8, 0.8, 1, 1, 1, 1, 0.8, 1, 1, 1, 0.8, 1, 1, 0.8, 0.8, 1, 1, 1, 1, 1, 0.8, 1.0, 0.8, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 0.8, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1, 0.8, 0.8, 1, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 1, 1, 1] | 0.9694 |
| 2.5509 | 3.0 | 297 | 1.0645 | [1.0, 0.6000000000000001, 1.0, 1, 0.6000000000000001, 0.0, 0.6000000000000001, 0.6000000000000001, 0.4, 1.0, 0.8, 1, 1, 0.6000000000000001, 0.4, 0.4, 1.0, 1, 1, 1.0, 1, 1.0, 1.0, 0.6000000000000001, 0.4, 0.6000000000000001, 0.6000000000000001, 0.8, 0.6000000000000001, 1.0, 1.0, 0.6000000000000001, 0.6000000000000001, 1.0, 0.8, 0.8, 1.0, 1.0, 0.6000000000000001, 0.6000000000000001, 0.6000000000000001, 1.0, 0.8, 1, 1.0, 0.8, 1, 1.0, 0.6000000000000001, 1.0, 1.0, 1.0, 1.0, 0.6000000000000001, 1.0, 1.0, 1, 1.0, 1.0, 1.0, 1, 1.0, 0.6000000000000001, 0.6000000000000001, 1.0, 0.8, 1, 1.0, 0.6000000000000001, 1, 0.6000000000000001, 0.4, 1.0, 1.0, 1.0, 0.6000000000000001, 0.6000000000000001, 0.6000000000000001, 0.8, 1.0, 1.0, 1.0, 1.0, 1.0, 1, 0.6000000000000001, 1, 1.0, 0.6000000000000001, 1.0, 0.6000000000000001, 0.6000000000000001, 1.0, 0.6000000000000001, 1.0, 1, 0.6000000000000001, 0.4, 1.0, 0.6000000000000001, 0.6000000000000001, 1, 1, 0.6000000000000001, 0.6000000000000001, 1.0, 1, 0.4, 0.8, 0.6000000000000001, 0.6000000000000001, 1, 0.6000000000000001, 1.0, 0.6000000000000001, 1, 1, 1.0, 0.6000000000000001, 0.0, 1.0, 1, 1, 0.4, 0.6000000000000001, 0.6000000000000001, 0.6000000000000001, 0.8, 0.6000000000000001, 0.6000000000000001, 0.4, 0.6000000000000001, 1, 0.6000000000000001, 0.6000000000000001, 1, 0.6000000000000001, 0.8, 1.0, 0.6000000000000001, 1.0, 1, 0.6000000000000001, 0.6000000000000001, 0.8, 0.6000000000000001, 1.0, 1.0, 1, 1, 1, 0.8, 0.6000000000000001, 0.6000000000000001, 0.6000000000000001, 1.0, 1, 1.0, 0.0, 1.0, 0.6000000000000001, 0.6000000000000001, 1, 0.8, 1.0, 0.6000000000000001, 0.6000000000000001, 1.0, 0.8, 1, 1, 1, 0.6000000000000001, 0.4, 0.6000000000000001, 0.4, 1.0, 1.0, 0.6000000000000001, 0.8, 0.4, 0.6000000000000001, 0.6000000000000001, 0.6000000000000001, 1.0, 1, 0.6000000000000001, 0.4, 1.0, 0.6000000000000001, 1, 1, 1.0, 1.0, 1, 1.0, 1.0, 1.0, 1, 0.6000000000000001, 1, 1.0, 1.0, 1.0, 1.0, 1.0, 
0.6000000000000001, 1, 1, 1.0, 1.0, 1.0, 0.6000000000000001, 1, 0.0, 0.6000000000000001, 0.6000000000000001, 0.6000000000000001, 1.0, 0.6000000000000001, 1, 1.0, 1, 0.6000000000000001, 0.6000000000000001, 1, 1, 0.6000000000000001, 1, 0.8, 0.6000000000000001, 1.0, 0.6000000000000001, 0.6000000000000001, 1.0, 0.4, 1, 1, 0.6000000000000001, 0.6000000000000001, 0.8, 1, 1.0, 1, 0.6000000000000001, 1.0, 0.8, 0.8, 0.6000000000000001, 1, 1, 0.6000000000000001, 0.6000000000000001, 1, 0.6000000000000001, 1.0, 1.0, 0.8, 0.6000000000000001, 0.0, 0.6000000000000001, 0.6000000000000001, 1.0, 0.8, 0.4, 0.6000000000000001, 0.6000000000000001, 0.6000000000000001, 1.0, 0.6000000000000001, 0.6000000000000001, 0.4, 1, 1, 1, 1.0, 0.4, 1, 0.6000000000000001, 0.4, 0.6000000000000001, 0.6000000000000001, 1, 1, 0.6000000000000001, 0.4, 1, 1, 0.4, 1.0, 1.0, 0.6000000000000001, 1.0, 1.0, 0.8, 1.0, 1.0, 0.6000000000000001, 0.6000000000000001, 0.8, 1, 0.6000000000000001, 0.6000000000000001, 0.4, 1, 1.0, 0.6000000000000001, 1.0, 1, 1.0, 0.4, 1, 1, 0.4, 0.6000000000000001, 1.0, 1, 1.0, 0.6000000000000001, 1.0, 1.0, 1.0, 0.6000000000000001, 1, 1, 1, 1, 1.0, 0.6000000000000001, 1.0, 0.6000000000000001, 0.6000000000000001, 0.8, 1, 0.6000000000000001, 0.6000000000000001, 0.6000000000000001, 1.0, 0.6000000000000001, 0.6000000000000001, 0.6000000000000001, 0.4, 1, 1, 0.6000000000000001, 0.6000000000000001, 0.8, 1, 0.6000000000000001, 1.0, 0.8, 1.0, 1.0, 1, 0.6000000000000001, 1.0, 0.4, 0.6000000000000001, 1] | 0.7944 |
| 1.323 | 4.0 | 396 | 0.7302 | [0.4, 0.8, 0.4, 1, 0.4, 0.6000000000000001, 0.8, 0.8, 0.6000000000000001, 1, 1, 1, 1, 0.8, 1.0, 0.6000000000000001, 0.4, 0.8, 0.8, 1, 0.8, 0.4, 0.4, 0.8, 0.6000000000000001, 0.8, 0.8, 1, 0.8, 0.4, 0.4, 0.8, 0.8, 0.4, 1, 1, 0.4, 0.4, 0.8, 0.8, 0.8, 0.4, 1, 1, 0.4, 1, 1, 1, 0.8, 0.8, 1, 0.4, 0.4, 0.8, 0.4, 1, 1, 1, 0.4, 0.4, 0.8, 0.4, 0.8, 0.8, 0.4, 1, 0.8, 0.4, 0.8, 1, 0.8, 1.0, 1, 1, 0.8, 0.8, 0.8, 0.8, 1, 0.4, 1, 0.4, 0.4, 0.8, 0.8, 0.8, 0.8, 1, 0.0, 1, 0.4, 0.8, 0.4, 0.4, 0.4, 1, 0.8, 1.0, 0.4, 0.8, 0.4, 0.8, 1, 0.8, 0.8, 0.4, 0.8, 1, 1, 0.8, 0.8, 0.8, 0.8, 0.4, 0.4, 1, 1, 0.4, 0.8, 0.6000000000000001, 0.4, 1, 0.8, 1.0, 0.4, 0.4, 0.8, 1, 0.8, 0.8, 1.0, 0.8, 0.8, 0.8, 0.8, 1, 0.8, 0.6000000000000001, 0.4, 0.8, 0.4, 0.8, 0.8, 0.8, 0.6000000000000001, 0.8, 1, 1, 1, 0.8, 1, 0.6000000000000001, 0.8, 0.8, 0.8, 1, 1, 1, 0.6000000000000001, 0.4, 0.8, 0.0, 1, 0.6000000000000001, 0.4, 0.8, 0.4, 0.4, 1, 1, 1, 0.8, 0.8, 1.0, 0.8, 1.0, 0.4, 0.4, 0.4, 1, 1.0, 0.8, 0.0, 0.8, 1, 0.8, 0.4, 0.6000000000000001, 0.4, 0.8, 0.8, 0.8, 1, 1, 0.8, 0.8, 1, 1, 0.8, 0.8, 1.0, 0.4, 1, 0.4, 0.4, 1, 0.8, 0.8, 0.8, 1, 0.8, 0.4, 0.8, 0.8, 0.6000000000000001, 0.4, 0.8, 0.8, 1, 0.8, 0.8, 0.4, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 1, 0.8, 1, 0.8, 0.4, 0.4, 0.6000000000000001, 1, 1, 0.8, 0.8, 1.0, 1, 1, 0.8, 0.8, 0.4, 1, 0.6000000000000001, 0.8, 1, 0.8, 0.8, 0.8, 0.8, 0.8, 0.4, 0.4, 1, 0.8, 0.6000000000000001, 0.8, 0.8, 0.4, 1, 0.6000000000000001, 0.8, 0.4, 0.8, 1, 0.8, 0.8, 0.6000000000000001, 1, 0.8, 0.8, 0.4, 0.6000000000000001, 0.8, 0.8, 0.6000000000000001, 0.8, 0.4, 1.0, 0.8, 0.4, 1.0, 0.8, 1, 0.6000000000000001, 0.4, 1, 0.4, 0.4, 1, 1, 1, 1, 0.8, 0.4, 1, 0.8, 0.0, 0.8, 1.0, 0.8, 0.4, 0.4, 0.4, 1, 0.4, 0.6000000000000001, 0.8, 1, 1, 0.4, 0.8, 1, 1, 0.4, 0.8, 0.4, 0.4, 0.8, 1, 1, 1, 0.8, 0.4, 0.8, 0.4, 0.8, 0.4, 1, 1, 0.4, 0.4, 0.8, 0.4, 0.8, 0.4, 0.8, 0.6000000000000001, 1, 1, 0.8, 0.8, 1, 1, 0.4, 0.4, 0.6000000000000001, 0.4, 1, 0.8, 0.8, 0.4, 0.6000000000000001, 
0.0, 1] | 0.7365 |
| 1.1398 | 5.0 | 495 | 0.6483 | [0.4, 0.8, 0.4, 1, 0.4, 0.6000000000000001, 0.8, 0.8, 0.6000000000000001, 1, 1, 1, 1, 0.8, 1.0, 0.6000000000000001, 0.4, 0.8, 0.8, 1, 0.8, 0.4, 0.4, 0.8, 0.6000000000000001, 0.8, 0.8, 1, 0.8, 0.4, 0.4, 0.8, 0.8, 0.4, 1, 1, 0.4, 0.4, 0.8, 0.8, 0.8, 0.4, 1, 1, 0.4, 1, 1, 1, 0.8, 0.8, 1, 0.4, 0.4, 0.8, 0.4, 1, 1, 1, 0.4, 0.4, 0.8, 0.4, 0.8, 0.8, 0.4, 1, 0.8, 0.4, 0.8, 1, 0.8, 1.0, 1, 1, 0.8, 0.8, 0.8, 0.8, 1, 0.4, 1, 0.4, 0.4, 0.8, 0.8, 0.8, 0.8, 1, 0.0, 1, 0.4, 0.8, 0.4, 0.4, 0.4, 1, 0.8, 1.0, 0.4, 0.8, 0.4, 0.8, 1, 0.8, 0.8, 0.4, 0.8, 1, 1, 0.8, 0.8, 0.8, 0.8, 0.4, 0.4, 1, 1, 0.4, 0.8, 0.6000000000000001, 0.4, 1, 0.8, 1.0, 0.4, 0.4, 0.8, 1, 0.8, 0.8, 1.0, 0.8, 0.8, 0.8, 0.8, 1, 0.8, 0.6000000000000001, 0.4, 0.8, 0.4, 0.8, 0.8, 0.8, 0.6000000000000001, 0.8, 1, 1, 1, 0.8, 1, 0.6000000000000001, 0.8, 0.8, 0.8, 1, 1, 1, 0.6000000000000001, 0.4, 0.8, 0.0, 1, 0.6000000000000001, 0.4, 0.8, 0.4, 0.4, 1, 1, 1, 0.8, 0.8, 1.0, 0.8, 1.0, 0.4, 0.4, 0.4, 1, 1.0, 0.8, 0.0, 0.8, 1, 0.8, 0.4, 0.6000000000000001, 0.4, 0.8, 0.8, 0.8, 1, 1, 0.8, 0.8, 1, 1, 0.8, 0.8, 1.0, 0.4, 1, 0.4, 0.4, 1, 0.8, 0.8, 0.8, 1, 0.8, 0.4, 0.8, 0.8, 0.6000000000000001, 0.4, 0.8, 0.8, 1, 0.8, 0.8, 0.4, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 1, 0.8, 1, 0.8, 0.4, 0.4, 0.6000000000000001, 1, 1, 0.8, 0.8, 1.0, 1, 1, 0.8, 0.8, 0.4, 1, 0.6000000000000001, 0.8, 1, 0.8, 0.8, 0.8, 0.8, 0.8, 0.4, 0.4, 1, 0.8, 0.6000000000000001, 0.8, 0.8, 0.4, 1, 0.6000000000000001, 0.8, 0.4, 0.8, 1, 0.8, 0.8, 0.6000000000000001, 1, 0.8, 0.8, 0.4, 0.6000000000000001, 0.8, 0.8, 0.6000000000000001, 0.8, 0.4, 1.0, 0.8, 0.4, 1.0, 0.8, 1, 0.6000000000000001, 0.4, 1, 0.4, 0.4, 1, 1, 1, 1, 0.8, 0.4, 1, 0.8, 0.0, 0.8, 1.0, 0.8, 0.4, 0.4, 0.4, 1, 0.4, 0.6000000000000001, 0.8, 1, 1, 0.4, 0.8, 1, 1, 0.4, 0.8, 0.4, 0.4, 0.8, 1, 1, 1, 0.8, 0.4, 0.8, 0.4, 0.8, 0.4, 1, 1, 0.4, 0.4, 0.8, 0.4, 0.8, 0.4, 0.8, 0.6000000000000001, 1, 1, 1, 0.8, 1, 1, 0.4, 0.4, 0.6000000000000001, 0.4, 1, 0.8, 0.8, 0.4, 0.6000000000000001, 
0.0, 1] | 0.7370 |
| 0.9565 | 6.0 | 594 | 0.6207 | [0.0, 0.8, 0.0, 1, 0.4, 0.6000000000000001, 0.8, 1, 0.6000000000000001, 1.0, 1, 1, 1, 1, 1.0, 0.6000000000000001, 0.4, 0.2, 0.2, 1, 0.8, 0.0, 0.4, 0.8, 0.6000000000000001, 1, 1, 1, 1, 0.0, 0.0, 0.8, 0.8, 0.4, 1, 1, 0.4, 0.0, 1, 1, 1, 0.4, 1, 1, 0.4, 1, 1, 1, 0.8, 0.8, 1, 0.4, 0.0, 0.8, 0.4, 1, 1, 1, 0.4, 0.4, 0.2, 0.4, 1, 0.8, 0.4, 1, 0.2, 0.0, 1, 1, 1, 1.0, 1, 1, 0.6000000000000001, 0.8, 1, 0.8, 1, 0.0, 1, 0.0, 0.4, 0.8, 0.8, 1, 0.2, 1, 0.0, 1, 0.8, 1, 0.4, 0.8, 0.4, 1, 0.8, 1.0, 0.0, 1, 0.4, 0.8, 1, 1, 1, 0.0, 0.2, 1, 1, 0.8, 1, 0.8, 1, 0.4, 0.4, 1, 1, 0.4, 0.8, 0.6000000000000001, 0.4, 0.6000000000000001, 0.2, 1.0, 0.4, 0.8, 0.8, 1, 0.8, 0.8, 1.0, 1, 0.8, 0.8, 1, 1, 1, 0.4, 0.4, 0.8, 0.0, 0.2, 0.8, 0.8, 0.6000000000000001, 0.8, 1, 1, 0.4, 0.8, 1, 0.6000000000000001, 0.8, 0.8, 1, 1, 1, 1, 1.0, 0.4, 1, 0.4, 1, 0.4, 0.4, 0.8, 0.8, 0.4, 1, 0.4, 1, 0.2, 1, 1.0, 0.8, 1.0, 0.4, 0.4, 0.4, 1, 1.0, 1, 0.4, 1, 1, 0.2, 0.4, 0.6000000000000001, 0.4, 0.8, 0.2, 0.8, 1, 1, 0.2, 0.8, 1, 1, 0.2, 0.8, 0.2, 0.4, 1, 0.4, 0.4, 1, 0.8, 0.2, 0.6000000000000001, 1, 0.8, 0.4, 0.8, 0.8, 1.0, 0.8, 1, 0.8, 1, 1, 0.8, 0.4, 0.4, 0.8, 0.8, 0.2, 0.2, 0.8, 0.8, 1, 0.8, 1, 1, 0.4, 0.4, 0.6000000000000001, 1, 1, 0.8, 0.8, 1.0, 1, 1, 0.8, 0.8, 0.4, 1, 0.4, 1, 1, 0.8, 1, 1, 0.8, 0.8, 0.0, 0.4, 1, 1, 1.0, 1, 0.8, 0.4, 1, 0.6000000000000001, 1, 0.4, 1, 1, 0.8, 0.8, 0.6000000000000001, 1, 0.8, 0.2, 0.0, 0.6000000000000001, 0.2, 0.8, 0.6000000000000001, 0.8, 0.8, 1.0, 0.2, 0.4, 1.0, 0.8, 1, 0.6000000000000001, 0.4, 1, 0.4, 0.4, 1, 1, 1, 1, 1, 0.8, 1, 0.2, 0.0, 0.8, 1.0, 0.8, 0.0, 0.8, 0.4, 1, 0.0, 0.6000000000000001, 0.6000000000000001, 1, 1, 0.4, 0.6000000000000001, 1, 1, 0.8, 0.8, 0.0, 0.4, 0.8, 1, 0.4, 0.4, 0.6000000000000001, 0.0, 0.8, 0.4, 0.8, 0.4, 1, 1, 0.8, 0.4, 1, 0.4, 1, 0.4, 0.8, 0.6000000000000001, 1, 1, 1, 0.8, 1, 1, 0.8, 0.4, 0.6000000000000001, 0.4, 1, 0.8, 0.8, 0.4, 0.6000000000000001, 0.0, 1] | 0.7070 |
| 0.8479 | 7.0 | 693 | 0.5786 | [0.0, 1, 0.0, 0.4, 0.8, 1.0, 0.8, 1, 0.6000000000000001, 1.0, 1, 0.4, 1, 1, 1, 0.6000000000000001, 0.4, 0.2, 0.2, 1, 0.2, 0.0, 0.0, 0.8, 0.6000000000000001, 1, 1, 1, 1, 0.0, 0.0, 0.8, 0.8, 0.4, 1, 1, 1, 0.0, 1, 1, 1, 0.4, 1, 0.4, 0.0, 1, 1, 1, 1, 0.6000000000000001, 1, 0.0, 0.0, 1, 0.0, 1, 1, 1, 0.0, 0.4, 0.2, 0.4, 1, 1, 1, 1, 0.2, 0.0, 1, 1, 1, 1.0, 1, 1, 0.6000000000000001, 0.8, 1, 1, 1, 0.0, 1, 0.0, 0.0, 0.8, 0.2, 1, 0.2, 1.0, 0.4, 1, 0.8, 1, 0.0, 1, 0.0, 1, 0.8, 1.0, 0.0, 1, 1, 0.2, 1, 1, 1, 0.0, 0.2, 1, 1, 0.8, 1, 0.2, 1, 0.4, 0.8, 1, 1, 1, 1, 1.0, 0.4, 0.6000000000000001, 0.2, 1, 0.8, 0.8, 1, 1, 1, 1, 1.0, 1, 0.2, 0.8, 1, 1, 1, 0.4, 0.4, 0.8, 0.0, 0.2, 1, 1, 0.4, 1, 1, 1, 0.4, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 0.4, 1, 0.4, 0.0, 1, 0.8, 0.4, 1, 0.4, 1, 0.2, 1, 1.0, 1, 1.0, 1, 0.0, 0.8, 1, 1.0, 1, 0.4, 1, 1, 0.2, 0.8, 0.6000000000000001, 0.4, 1, 0.2, 0.2, 1, 1, 0.2, 1, 1, 1, 0.2, 1, 0.2, 0.4, 1, 0.4, 1, 1.0, 1, 0.2, 0.6000000000000001, 1.0, 0.8, 0.4, 1, 0.6000000000000001, 1.0, 0.8, 1, 1, 1, 1, 1, 0.4, 0.4, 1, 1, 0.2, 1, 1, 1, 1, 1, 1, 1, 0.8, 0.0, 0.6000000000000001, 1, 1, 1, 1, 0.8, 1, 1, 0.4, 0.8, 1, 1, 0.4, 1, 1, 0.6000000000000001, 1, 1, 1, 1, 0.0, 0.4, 1, 1, 1.0, 1, 1, 1, 1, 0.6000000000000001, 1, 0.8, 1, 1, 0.8, 1, 0.6000000000000001, 1, 0.2, 0.2, 0.0, 1, 0.2, 1, 0.6000000000000001, 1, 0.8, 1, 0.2, 0.8, 1, 0.2, 1, 0.6000000000000001, 0.0, 1, 0.8, 0.0, 1, 1, 1, 1, 1, 0.8, 1, 0.2, 0.4, 0.8, 1, 0.2, 0.0, 0.8, 0.0, 1, 0.0, 0.6000000000000001, 0.6000000000000001, 1, 1, 1, 0.6000000000000001, 1, 1.0, 1, 0.8, 0.0, 0.4, 1, 1, 0.4, 0.4, 0.6000000000000001, 0.0, 1, 0.0, 1, 0.8, 1, 1, 0.8, 0.8, 1, 0.0, 1, 0.8, 1, 1, 1, 0.4, 1, 1, 1, 1, 0.8, 0.0, 0.4, 0.0, 1.0, 0.2, 1, 0.4, 0.6000000000000001, 0.4, 1] | 0.7231 |
| 0.7582 | 8.0 | 792 | 0.6239 | [0.4, 1, 0.4, 1, 1, 0.6000000000000001, 0.8, 1, 0.6000000000000001, 1.0, 1.0, 0.4, 1, 0.8, 1, 1, 0.4, 0.8, 0.8, 1, 0.8, 1, 1, 1, 1, 1, 0.8, 1, 1, 0.4, 1, 0.8, 1, 0.4, 1, 1.0, 1, 0.0, 1, 0.8, 0.8, 0.4, 1, 1, 0.4, 1, 1, 1, 1, 0.6000000000000001, 1, 0.4, 0.4, 1, 0.4, 1, 1, 1, 0.4, 0.4, 0.2, 0.4, 0.8, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 0.6000000000000001, 0.8, 1, 0.8, 1, 0.0, 1, 0.0, 0.4, 0.8, 0.8, 0.8, 1, 1, 1, 1, 0.8, 0.8, 1, 1, 1, 1, 1, 1.0, 0.4, 1, 1, 0.2, 1, 0.8, 1, 0.0, 0.8, 1, 1, 1, 1, 0.2, 1, 0.4, 0.4, 1, 1, 1, 1, 1, 0.4, 1, 1, 1.0, 1, 0.8, 1, 1, 1, 0.8, 1.0, 1, 1, 0.8, 1, 1, 1, 1, 0.4, 1, 0.4, 0.2, 1, 1, 0.6000000000000001, 1, 1, 1, 1, 0.8, 1, 1, 1, 1, 0.8, 1, 1, 1, 0.6000000000000001, 1, 0.8, 1, 1, 0.6000000000000001, 1, 1, 1, 0.4, 1, 1, 1, 0.2, 1, 1.0, 1, 1.0, 1, 1, 1, 1, 1, 1, 1, 0.8, 1, 0.2, 0.4, 1, 1, 0.8, 1, 0.8, 1, 1, 0.8, 1, 1, 1, 0.8, 1, 1.0, 1, 1, 1, 0.4, 1, 1, 0.2, 0.8, 1.0, 0.8, 0.4, 1, 0.6000000000000001, 1.0, 0.8, 0.8, 1, 1, 0.8, 1, 0.4, 0.4, 0.8, 1, 0.2, 1, 1, 1, 1, 1, 1, 1, 0.4, 1, 0.6000000000000001, 1, 1, 1, 1, 1, 1, 1, 0.8, 0.8, 1, 1, 1, 1, 1, 0.8, 1, 1, 0.8, 1, 0.0, 1, 1, 1, 1.0, 1, 1, 1, 1, 0.6000000000000001, 1, 1, 1, 1, 0.8, 1, 0.6000000000000001, 1, 1, 1, 1, 0.6000000000000001, 0.8, 0.8, 0.6000000000000001, 1, 0.4, 1, 0.8, 0.4, 1, 0.8, 1, 0.6000000000000001, 0.4, 1, 0.4, 1, 1, 1, 1, 1, 0.8, 0.4, 1, 1, 1, 0.8, 1, 1, 0.0, 0.4, 1, 1, 0.0, 0.6000000000000001, 0.8, 1, 1, 1, 1, 1, 1, 1, 0.8, 0.4, 0.4, 1, 1, 1, 1, 1, 1, 1, 1, 0.8, 0.4, 1, 1, 0.4, 0.8, 1, 0.0, 0.8, 0.4, 0.8, 1, 1, 1, 0.8, 1, 1, 1, 0.8, 0.4, 0.6000000000000001, 0.4, 1.0, 1, 0.8, 1, 0.6000000000000001, 0.0, 1] | 0.8384 |
| 0.7579 | 9.0 | 891 | 0.5293 | [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 1, 1, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | 0.9972 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.6.1
- Tokenizers 0.14.1
|
glif-loradex-trainer/i12bp8_appelsiensam_instructionsmeme_v1 | glif-loradex-trainer | "2024-11-08T10:31:54Z" | 16 | 3 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | "2024-11-08T10:31:21Z" | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1731061820071__000003000_0.jpg
text: '1NSTRUCT10NM3M3 Instructional diagram in black and white line art style,
safety manual aesthetic. Title banner reads ''Instructions: Testing Baby''s
Bottle''. Split screen layout. Left side labeled ''DO'': testing milk temperature
with drops on wrist. Right side labeled ''DON''T'': drinking directly from
baby''s bottle. Red warning triangle in corner. Minimalist line drawing style.
White background.'
- output:
url: samples/1731061844748__000003000_1.jpg
text: '1NSTRUCT10NM3M3 Instructional diagram in black and white line art style,
safety manual aesthetic. Title banner reads ''Instructions: Drying Baby''.
Split screen layout. Left side labeled ''YES'': gently drying baby with towel
by hand. Right side labeled ''NO'': person attempting to use clothes dryer.
Red warning triangle in corner. Minimalist line drawing style. White background.'
- output:
url: samples/1731061869418__000003000_2.jpg
text: '1NSTRUCT10NM3M3 Instructional diagram in black and white line art style,
safety manual aesthetic. Title banner reads ''Instructions: Introducing Baby
to Pets''. Split screen layout. Left side labeled ''SAFE'': parent carefully
holding baby while introducing to calm dog at safe distance. Right side labeled
''UNSAFE'': baby placed directly against fish tank with aquarium decorations.
Red warning triangle in corner. Minimalist line drawing style. White background.'
base_model: black-forest-labs/FLUX.1-dev
trigger: 1NSTRUCT10NM3M3
instance_prompt: 1NSTRUCT10NM3M3
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# appelsiensam_instructionsmeme_v1
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `i12bp8`.
<Gallery />
## Trigger words
You should use `1NSTRUCT10NM3M3` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/i12bp8_appelsiensam_instructionsmeme_v1/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
isspek/roberta-base_ebola_zika_covid_monkeypox_1_2e-5_16 | isspek | "2025-03-03T20:04:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-03T20:04:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
scvi-tools/tabula-sapiens-bone_marrow-scvi | scvi-tools | "2024-12-08T09:46:39Z" | 0 | 0 | scvi-tools | [
"scvi-tools",
"tensorboard",
"biology",
"genomics",
"single-cell",
"model_cls_name:SCVI",
"scvi_version:1.2.0",
"anndata_version:0.11.1",
"modality:rna",
"tissue:various",
"annotated:True",
"license:cc-by-4.0",
"region:us"
] | null | "2023-03-15T19:09:06Z" | ---
library_name: scvi-tools
license: cc-by-4.0
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:1.2.0
- anndata_version:0.11.1
- modality:rna
- tissue:various
- annotated:True
---
ScVI is a variational inference model for single-cell RNA-seq data that can learn an underlying
latent space, integrate technical batches and impute dropouts.
The learned low-dimensional latent representation of the data can be used for visualization and
clustering.
scVI takes as input a scRNA-seq gene expression matrix with cells and genes.
We provide an extensive [user guide](https://docs.scvi-tools.org/en/1.2.0/user_guide/models/scvi.html).
- See our original manuscript for further details of the model:
[scVI manuscript](https://www.nature.com/articles/s41592-018-0229-2).
- See our manuscript on [scvi-hub](https://www.biorxiv.org/content/10.1101/2024.03.01.582887v2) on how
to leverage pre-trained models.
This model can be used for fine-tuning on new data using our Arches framework:
[Arches tutorial](https://docs.scvi-tools.org/en/1.0.0/tutorials/notebooks/scarches_scvi_tools.html).
# Model Description
Tabula Sapiens is a benchmark, first-draft human cell atlas of nearly 500,000 cells from 24 organs of 15 normal human subjects.
# Metrics
We provide here key performance metrics for the uploaded model, if provided by the data uploader.
<details>
<summary><strong>Coefficient of variation</strong></summary>
The cell-wise coefficient of variation summarizes how well variation between different cells is
preserved by the generated model expression. Below a squared Pearson correlation coefficient of 0.4,
we would recommend not using generated data for downstream analysis, while the generated latent
space might still be useful for analysis.
**Cell-wise Coefficient of Variation**:
| Metric | Training Value | Validation Value |
|-------------------------|----------------|------------------|
| Mean Absolute Error | 2.37 | 2.42 |
| Pearson Correlation | 0.84 | 0.82 |
| Spearman Correlation | 0.86 | 0.84 |
| R² (R-Squared) | 0.53 | 0.50 |
The gene-wise coefficient of variation summarizes how well variation between different genes is
preserved by the generated model expression. This value is usually quite high.
**Gene-wise Coefficient of Variation**:
| Metric | Training Value |
|-------------------------|----------------|
| Mean Absolute Error | 13.41 |
| Pearson Correlation | 0.59 |
| Spearman Correlation | 0.66 |
| R² (R-Squared) | -1.47 |
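An illustrative, self-contained sketch (toy data, not the actual evaluation code) of how the cell-wise version of this metric can be computed: the per-cell coefficient of variation in observed vs. generated expression, followed by the squared Pearson correlation between the two:

```python
from statistics import mean, stdev

def cell_cv(expression_row):
    """Coefficient of variation of one cell: std / mean across genes."""
    m = mean(expression_row)
    return stdev(expression_row) / m if m else 0.0

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Toy data: observed vs. model-generated expression, 4 cells x 3 genes.
observed = [[1.0, 2.0, 3.0], [2.0, 2.0, 2.5], [0.5, 1.0, 4.0], [3.0, 3.5, 1.0]]
generated = [[1.1, 1.9, 3.2], [2.1, 2.0, 2.4], [0.6, 1.2, 3.8], [2.8, 3.4, 1.2]]

cv_obs = [cell_cv(c) for c in observed]
cv_gen = [cell_cv(c) for c in generated]
r2 = pearson(cv_obs, cv_gen) ** 2
print(round(r2, 3))
```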
</details>
<details>
<summary><strong>Differential expression metric</strong></summary>
The differential expression metric provides a summary of the differential expression analysis
between cell types or input clusters. We provide here the F1-score, Pearson Correlation
Coefficient of Log-Foldchanges, Spearman Correlation Coefficient, and Area Under the Precision
Recall Curve (AUPRC) for the differential expression analysis using Wilcoxon Rank Sum test for each
cell-type.
**Differential expression**:
| Index | gene_f1 | lfc_mae | lfc_pearson | lfc_spearman | roc_auc | pr_auc | n_cells |
| --- | --- | --- | --- | --- | --- | --- | --- |
| neutrophil | 0.96 | 2.20 | 0.65 | 0.88 | 0.18 | 0.84 | 2911.00 |
| CD4-positive, alpha-beta T cell | 0.95 | 1.96 | 0.62 | 0.90 | 0.34 | 0.79 | 2025.00 |
| monocyte | 0.87 | 1.71 | 0.68 | 0.88 | 0.40 | 0.77 | 1389.00 |
| CD8-positive, alpha-beta T cell | 0.86 | 2.96 | 0.58 | 0.85 | 0.32 | 0.74 | 1147.00 |
| granulocyte | 0.74 | 2.52 | 0.62 | 0.88 | 0.46 | 0.83 | 853.00 |
| plasma cell | 0.77 | 2.35 | 0.70 | 0.91 | 0.19 | 0.85 | 825.00 |
| erythroid progenitor cell | 0.61 | 2.34 | 0.70 | 0.91 | 0.49 | 0.89 | 757.00 |
| mature NK T cell | 0.81 | 3.70 | 0.58 | 0.78 | 0.33 | 0.69 | 678.00 |
| hematopoietic stem cell | 0.91 | 2.30 | 0.64 | 0.88 | 0.53 | 0.85 | 617.00 |
| memory B cell | 0.88 | 4.38 | 0.58 | 0.69 | 0.33 | 0.70 | 310.00 |
| common myeloid progenitor | 0.71 | 2.65 | 0.70 | 0.89 | 0.55 | 0.88 | 287.00 |
| macrophage | 0.86 | 4.46 | 0.63 | 0.70 | 0.34 | 0.78 | 265.00 |
| naive B cell | 0.91 | 5.27 | 0.60 | 0.67 | 0.26 | 0.66 | 142.00 |
| erythrocyte | 0.92 | 5.10 | 0.55 | 0.50 | 0.32 | 0.90 | 87.00 |
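For intuition, here is a minimal pure-Python sketch of the rank-sum statistic that underlies the Wilcoxon test (illustrative only; this is not the evaluation code used for the table above):

```python
# Wilcoxon rank-sum (Mann-Whitney) statistic: rank the pooled values
# (tied values get their average rank), then sum the ranks of group A.
def rank_sum(group_a, group_b):
    pooled = sorted(group_a + group_b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        # values pooled[i:j] are tied; they share the mean of ranks i+1 .. j
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j
    return sum(ranks[v] for v in group_a)

print(rank_sum([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # -> 6.0 (ranks 1+2+3)
```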
</details>
# Model Properties
We provide here key parameters used to setup and train the model.
<details>
<summary><strong>Model Parameters</strong></summary>
These provide the settings to setup the original model:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
</details>
<details>
<summary><strong>Setup Data Arguments</strong></summary>
Arguments passed to setup_anndata of the original model:
```json
{
"layer": null,
"batch_key": "donor_assay",
"labels_key": "cell_ontology_class",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
</details>
<details>
<summary><strong>Data Registry</strong></summary>
Registry elements for AnnData manager:
| Registry Key | scvi-tools Location |
|-------------------|--------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['scvi_latent_qzm'] |
| latent_qzv | adata.obsm['scvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['observed_lib_size'] |
- **Data is Minified**: False
</details>
<details>
<summary><strong>Summary Statistics</strong></summary>
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 7 |
| n_cells | 12293 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 14 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 3000 |
</details>
<details>
<summary><strong>Training</strong></summary>
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the
scvi-tools documentation for details. -->
**Training data url**: Not provided by uploader
If provided by the original uploader, for those interested in understanding or replicating the
training process, the code is available at the link below.
**Training Code URL**: https://github.com/YosefLab/scvi-hub-models/blob/main/src/scvi_hub_models/TS_train_all_tissues.ipynb
</details>
# References
The Tabula Sapiens Consortium. The Tabula Sapiens: A multiple-organ, single-cell transcriptomic atlas of humans. Science, May 2022. doi:10.1126/science.abl4896
|
Zetatech/pvt-tiny-224 | Zetatech | "2023-09-12T04:51:39Z" | 1,713 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"pvt",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2102.12122",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-05-24T00:53:31Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Pyramid Vision Transformer (tiny-sized model)
Pyramid Vision Transformer (PVT) model pre-trained on ImageNet-1K (1 million images, 1000 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/abs/2102.12122) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao and first released in [this repository](https://github.com/whai362/PVT).
Disclaimer: The team releasing PVT did not write a model card for this model so this model card has been written by [Rinat S. [@Xrenya]](https://huggingface.co/Xrenya).
## Model description
The Pyramid Vision Transformer (PVT) is a transformer encoder model (BERT-like) pretrained on ImageNet-1k (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of variable-size patches, which are linearly embedded. Unlike ViT models, PVT uses a progressive shrinking pyramid to reduce the computation on large feature maps at each stage. A [CLS] token is added to the beginning of the sequence for classification tasks, and absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder.
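To illustrate the progressive shrinking, the sketch below computes the token count at each stage for a 224x224 input. The stage strides are the standard four-stage PVT reduction factors (an assumption for illustration, not values read from this checkpoint's configuration):

```python
# Each PVT stage embeds patches with a stride, so the spatial grid (and
# therefore the token count) shrinks stage by stage instead of staying fixed.
def pyramid_token_counts(image_size=224, stage_strides=(4, 2, 2, 2)):
    counts, side = [], image_size
    for stride in stage_strides:
        side //= stride             # spatial side length after this stage
        counts.append(side * side)  # number of tokens processed by the stage
    return counts

# Token counts at 1/4, 1/8, 1/16 and 1/32 of the input resolution.
print(pyramid_token_counts())  # -> [3136, 784, 196, 49]
```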
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/Xrenya) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import PvtImageProcessor, PvtForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = PvtImageProcessor.from_pretrained('Zetatech/pvt-tiny-224')
model = PvtForImageClassification.from_pretrained('Zetatech/pvt-tiny-224')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/pvt.html#).
## Training data
The PVT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1,000 classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/whai362/PVT/blob/v2/classification/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
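A minimal sketch of this normalization applied to a single RGB pixel (illustrative only):

```python
# Per-channel ImageNet normalization: scale pixel values to [0, 1], then
# subtract the channel mean and divide by the channel standard deviation.
MEAN = (0.485, 0.456, 0.406)
STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb_0_255):
    scaled = [v / 255.0 for v in rgb_0_255]
    return [(v - m) / s for v, m, s in zip(scaled, MEAN, STD)]

print([round(v, 3) for v in normalize_pixel((128, 128, 128))])
# -> [0.074, 0.205, 0.426]
```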
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2021pyramid,
title={Pyramid vision transformer: A versatile backbone for dense prediction without convolutions},
author={Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={568--578},
year={2021}
}
```
|
sumangpt/adapter | sumangpt | "2024-01-13T15:29:41Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"custom_code",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | "2024-01-13T14:38:34Z" | ---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
sail-rvc/Frieza-ENG | sail-rvc | "2023-07-14T07:23:05Z" | 3 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:22:26Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Frieza-ENG
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:23:05
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
silver18723/ppo-LunarLander-v2 | silver18723 | "2023-05-22T09:36:35Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T09:06:04Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.29 +/- 21.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the Files & versions tab for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename is an assumption; adjust to the .zip actually stored in this repo.
checkpoint = load_from_hub("silver18723/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nomsgadded/pokemon-lora | nomsgadded | "2023-07-11T05:25:03Z" | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-07-11T03:46:05Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - nomsgadded/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




|
ananthu-aniraj/pdiscoformer_nabirds_k_4 | ananthu-aniraj | "2024-09-25T09:23:54Z" | 10 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"image-classification",
"en",
"arxiv:2407.04538",
"base_model:timm/vit_base_patch14_reg4_dinov2.lvd142m",
"base_model:finetune:timm/vit_base_patch14_reg4_dinov2.lvd142m",
"license:mit",
"region:us"
] | image-classification | "2024-09-25T09:12:40Z" | ---
pipeline_tag: image-classification
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- image-classification
license: mit
language:
- en
base_model:
- timm/vit_base_patch14_reg4_dinov2.lvd142m
---
# PdiscoFormer NABirds Model (K=4)
PdiscoFormer (Vit-base-dinov2-reg4) trained on NABirds with K (number of unsupervised parts to discover) set to a value of 4.
PdiscoFormer is a novel method for unsupervised part discovery using self-supervised Vision Transformers, which achieves state-of-the-art results for this task, both qualitatively and quantitatively. The code can be found in the following repository: https://github.com/ananthu-aniraj/pdiscoformer
# BibTex entry and citation info
```
@misc{aniraj2024pdiscoformerrelaxingdiscoveryconstraints,
title={PDiscoFormer: Relaxing Part Discovery Constraints with Vision Transformers},
author={Ananthu Aniraj and Cassio F. Dantas and Dino Ienco and Diego Marcos},
year={2024},
eprint={2407.04538},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.04538},
}
```
|
Setpember/bert-medium_lora_r16_epsilon100 | Setpember | "2025-03-21T21:10:26Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:prajjwal1/bert-medium",
"base_model:adapter:prajjwal1/bert-medium",
"region:us"
] | null | "2025-03-21T17:54:21Z" | ---
base_model: prajjwal1/bert-medium
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
le0l3wis/distilbert-base-uncased-finetuned-emotion | le0l3wis | "2025-02-20T02:04:58Z" | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-17T00:25:56Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2135
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8329 | 1.0 | 250 | 0.3144 | 0.909 | 0.9080 |
| 0.2523 | 2.0 | 500 | 0.2135 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.1
- Tokenizers 0.21.0
|
Sumail/Goat_Derrick06 | Sumail | "2024-03-30T00:43:32Z" | 89 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:coffiee/s17",
"base_model:merge:coffiee/s17",
"base_model:zzttbrdd/sn6_06s",
"base_model:merge:zzttbrdd/sn6_06s",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-30T00:41:52Z" | ---
base_model:
- zzttbrdd/sn6_06s
- coffiee/s17
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zzttbrdd/sn6_06s](https://huggingface.co/zzttbrdd/sn6_06s)
* [coffiee/s17](https://huggingface.co/coffiee/s17)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: coffiee/s17
layer_range: [0, 24]
- model: zzttbrdd/sn6_06s
layer_range: [0, 24]
merge_method: slerp
base_model: coffiee/s17
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
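For intuition, SLERP walks along the arc between the two models' weight vectors rather than the straight line between them. A toy stdlib sketch of the interpolation itself (mergekit's actual implementation operates per-tensor, driven by the `t` schedule above):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors (plain lists)."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1 + eps)
    theta = math.acos(max(-1.0, min(1.0, dot)))  # angle between the vectors
    if theta < eps:                              # nearly parallel: fall back to lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

halfway = slerp(0.5, [1.0, 0.0], [0.0, 1.0])  # stays on the unit circle
```

Unlike plain averaging, the interpolated vector keeps the magnitude structure of its endpoints, which is the usual motivation for choosing SLERP as a merge method.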
|
miexue/lora_model | miexue | "2025-03-12T06:26:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-12T06:25:53Z" | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** miexue
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lichorosario/flux-RealismLora | lichorosario | "2024-09-12T03:39:56Z" | 477 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:other",
"region:us"
] | text-to-image | "2024-09-12T03:35:42Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
base_model: black-forest-labs/FLUX.1-schnell
pipeline_tag: text-to-image
instance_prompt:
library_name: diffusers
inference:
parameters:
width: 1024
height: 1024
widget:
- text: >-
RTMI style. Guybrush threepwood, a tall man with blonde hair, wearing a
blue pirate coat with gold accents. He has a white shirt underneath, a belt
with a gold buckle, and dark pants. His expression is thoughtful, and he has
a slight stubble on his face, adding to his adventurous appearance. Guybrush
is programming with many computers. Cyberpunk style.
example_title: Guybrush programmer
output:
url: samples/guybrush-programmer.png
---
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('lichorosario/flux-lora-rtmi', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
<Gallery />
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) |
fowlart99/hihi | fowlart99 | "2023-09-23T18:29:49Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2023-09-23T18:29:49Z" | ---
license: bigscience-openrail-m
---
|
MaadTechnologies/MoonLander | MaadTechnologies | "2024-05-23T11:05:30Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-05-23T10:56:40Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.00 +/- 45.57
name: mean_reward
verified: false
---
# **MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the algorithm class and checkpoint filename below are assumptions; check the repo for the actual files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# "PPO" and "model.zip" are assumptions, not confirmed by this card.
checkpoint = load_from_hub("MaadTechnologies/MoonLander", "model.zip")
model = PPO.load(checkpoint)
```
|
guydebruyn/ppo-CartPole-v2 | guydebruyn | "2023-09-20T16:03:06Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-20T16:03:01Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -165.83 +/- 64.75
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'guydebruyn/ppo-CartPole-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
Felladrin/gguf-Qwen2-0.5B-Instruct-llamafy | Felladrin | "2024-06-27T12:07:07Z" | 13 | 0 | null | [
"gguf",
"base_model:Minami-su/Qwen2-0.5B-Instruct-llamafy",
"base_model:quantized:Minami-su/Qwen2-0.5B-Instruct-llamafy",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-27T11:42:24Z" | ---
license: apache-2.0
base_model: Minami-su/Qwen2-0.5B-Instruct-llamafy
---
GGUF version of [Minami-su/Qwen2-0.5B-Instruct-llamafy](https://huggingface.co/Minami-su/Qwen2-0.5B-Instruct-llamafy).
|
BeepBoopBox/64aead | BeepBoopBox | "2025-02-10T18:37:29Z" | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-02-10T18:36:22Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
DevQuasar/OpenSafetyLab.MD-Judge-v0_2-internlm2_7b-GGUF | DevQuasar | "2025-02-01T23:03:03Z" | 28 | 0 | null | [
"gguf",
"text-generation",
"base_model:OpenSafetyLab/MD-Judge-v0_2-internlm2_7b",
"base_model:quantized:OpenSafetyLab/MD-Judge-v0_2-internlm2_7b",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-10-20T02:40:25Z" | ---
base_model:
- OpenSafetyLab/MD-Judge-v0_2-internlm2_7b
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [OpenSafetyLab/MD-Judge-v0_2-internlm2_7b](https://huggingface.co/OpenSafetyLab/MD-Judge-v0_2-internlm2_7b)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
Timiry/autoencoder_fashion_mnist | Timiry | "2023-06-20T18:29:19Z" | 0 | 0 | tf-keras | [
"tf-keras",
"region:us"
] | null | "2023-06-20T14:21:54Z" | # Final Assignment, Variant No. 7
# Task
Given the fashion_mnist dataset, build an autoencoder that takes an image of a clothing item as input and reproduces the same image at its output.
# Neural network architecture

# Number of trainable parameters

# Training setup
1. Optimization algorithm: Adam
2. Loss function: mean squared error
# Dataset split sizes
1. Training: 48,000
2. Validation: 12,000
3. Test: 10,000
# Model training results
 |
backyardai/Llama-3.3-70B-Instruct-GGUF | backyardai | "2025-02-21T10:14:48Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"en",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"de",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:quantized:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-02-21T08:11:50Z" | ---
base_model: meta-llama/Llama-3.3-70B-Instruct
language:
- en
- fr
- it
- pt
- hi
- es
- th
- de
library_name: transformers
license: llama3.3
model_name: Llama-3.3-70B-Instruct-GGUF
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
extra_gated_prompt: "### LLAMA 3.3 COMMUNITY LICENSE AGREEMENT\nLlama 3.3 Version\
\ Release Date: December 6, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Llama 3.3 distributed by Meta at [https://www.llama.com/docs/overview](https://llama.com/docs/overview).\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.3\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at [https://www.llama.com/llama-downloads](https://www.llama.com/llama-downloads).\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Llama 3.3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
By clicking “I Accept” below or by using or distributing any portion or element\
\ of the Llama Materials, you agree to be bound by this Agreement.\n1. License Rights\
\ and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide,\
\ non-transferable and royalty-free limited license under Meta’s intellectual property\
\ or other rights owned by Meta embodied in the Llama Materials to use, reproduce,\
\ distribute, copy, create derivative works of, and make modifications to the Llama\
\ Materials.\nb. Redistribution and Use.\ni. If you distribute or make available\
\ the Llama Materials (or any derivative works thereof), or a product or service\
\ (including another AI model) that contains any of them, you shall (A) provide\
\ a copy of this Agreement with any such Llama Materials; and (B) prominently display\
\ “Built with Llama” on a related website, user interface, blogpost, about page,\
\ or product documentation. If you use the Llama Materials or any outputs or results\
\ of the Llama Materials to create, train, fine tune, or otherwise improve an AI\
\ model, which is distributed or made available, you shall also include “Llama”\
\ at the beginning of any such AI model name.\nii. If you receive Llama Materials,\
\ or any derivative works thereof, from a Licensee as part of an integrated end\
\ user product, then Section 2 of this Agreement will not apply to you. \niii. You\
\ must retain in all copies of the Llama Materials that you distribute the following\
\ attribution notice within a “Notice” text file distributed as a part of such copies:\
\ “Llama 3.3 is licensed under the Llama 3.3 Community License, Copyright © Meta\
\ Platforms, Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must\
\ comply with applicable laws and regulations (including trade compliance laws and\
\ regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available\
\ at [https://www.llama.com/llama3\\_3/use-policy](https://www.llama.com/llama3_3/use-policy)),\
\ which is hereby incorporated by reference into this Agreement. \n2. Additional\
\ Commercial Terms. If, on the Llama 3.3 version release date, the monthly active\
\ users of the products or services made available by or for Licensee, or Licensee’s\
\ affiliates, is greater than 700 million monthly active users in the preceding\
\ calendar month, you must request a license from Meta, which Meta may grant to\
\ you in its sole discretion, and you are not authorized to exercise any of the\
\ rights under this Agreement unless or until Meta otherwise expressly grants you\
\ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
\ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
\ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
\ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
\ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
\ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
\ trademark licenses are granted under this Agreement, and in connection with the\
\ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
\ associated with the other or any of its affiliates, except as required for reasonable\
\ and customary use in describing and redistributing the Llama Materials or as set\
\ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
\ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
\ You will comply with Meta’s brand guidelines (currently accessible at [https://about.meta.com/brand/resources/meta/company-brand/](https://about.meta.com/brand/resources/meta/company-brand/)[)](https://en.facebookbrand.com/).\
\ All goodwill arising out of your use of the Mark will inure to the benefit of\
\ Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
\ by or for Meta, with respect to any derivative works and modifications of the\
\ Llama Materials that are made by you, as between you and Meta, you are and will\
\ be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.3 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Llama 3.3 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.3. If you access or use Llama 3.3, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3\\\
_3/use-policy](https://www.llama.com/llama3_3/use-policy).\nProhibited Uses\nWe\
\ want everyone to use Llama 3.3 safely and responsibly. You agree you will not\
\ use, or allow others to use, Llama 3.3 to:\n1. Violate the law or others’ rights,\
\ including to:\n\n 1. Engage in, promote, generate, contribute to, encourage,\
\ plan, incite, or further illegal or unlawful activity or content, such as: \n\
\ 1. Violence or terrorism \n 2. Exploitation or harm to children, including\
\ the solicitation, creation, acquisition, or dissemination of child exploitative\
\ content or failure to report Child Sexual Abuse Material \n 3. Human trafficking,\
\ exploitation, and sexual violence \n 4. The illegal distribution of information\
\ or materials to minors, including obscene materials, or failure to employ legally\
\ required age-gating in connection with such information or materials. \n \
\ 5. Sexual solicitation \n 6. Any other criminal activity\n\n 2. Engage\
\ in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying\
\ of individuals or groups of individuals\n\n 3. Engage in, promote, incite, or\
\ facilitate discrimination or other unlawful or harmful conduct in the provision\
\ of employment, employment benefits, credit, housing, other economic benefits,\
\ or other essential goods and services\n\n 4. Engage in the unauthorized or unlicensed\
\ practice of any profession including, but not limited to, financial, legal, medical/health,\
\ or related professional practices\n\n 5. Collect, process, disclose, generate,\
\ or infer private or sensitive information about individuals, including information\
\ about individuals’ identity, health, or demographic information, unless you have\
\ obtained the right to do so in accordance with applicable law\n\n 6. Engage\
\ in or facilitate any action or generate any content that infringes, misappropriates,\
\ or otherwise violates any third-party rights, including the outputs or results\
\ of any products or services using the Llama Materials\n\n 7. Create, generate,\
\ or facilitate the creation of malicious code, malware, computer viruses or do\
\ anything else that could disable, overburden, interfere with or impair the proper\
\ working, integrity, operation or appearance of a website or computer system\n\n\
\ 8. Engage in any action, or facilitate any action, to intentionally circumvent\
\ or remove usage restrictions or other safety measures, or to enable functionality\
\ disabled by Meta\n\n2. Engage in, promote, incite, facilitate, or assist in the\
\ planning or development of activities that present a risk of death or bodily harm\
\ to individuals, including use of Llama 3.3 related to the following:\n\n 1.\
\ Military, warfare, nuclear industries or applications, espionage, use for materials\
\ or activities that are subject to the International Traffic Arms Regulations (ITAR)\
\ maintained by the United States Department of State or to the U.S. Biological\
\ Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation\
\ Act of 1997\n\n 2. Guns and illegal weapons (including weapon development)\n\
\n 3. Illegal drugs and regulated/controlled substances\n\n 4. Operation of\
\ critical infrastructure, transportation technologies, or heavy machinery\n\n \
\ 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n\
\n 6. Any content intended to incite or promote violence, abuse, or any infliction\
\ of bodily harm to an individual\n\n3. Intentionally deceive or mislead others,\
\ including use of Llama 3.3 related to the following:\n\n 1. Generating, promoting,\
\ or furthering fraud or the creation or promotion of disinformation\n\n 2. Generating,\
\ promoting, or furthering defamatory content, including the creation of defamatory\
\ statements, images, or other content\n\n 3. Generating, promoting, or further\
\ distributing spam\n\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n\n 5. Representing that the use of Llama 3.3 or outputs are\
\ human-generated\n\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\n5. Interact with third\
\ party tools, models, or software designed to generate unlawful content or engage\
\ in unlawful or harmful conduct and/or represent that the outputs of such tools,\
\ models, or software are associated with Meta or Llama 3.3\nWith respect to any\
\ multimodal models included in Llama 3.3, the rights granted under Section 1(a)\
\ of the Llama 3.3 Community License Agreement are not being granted to you if you\
\ are an individual domiciled in, or a company with a principal place of business\
\ in, the European Union. This restriction does not apply to end users of a product\
\ or service that incorporates any such multimodal models.\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n* Reporting issues with the\
\ model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\
\ * Reporting risky content generated by the model: [developers.facebook.com/llama\\\
_output\\_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting\
\ bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\
\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.3: [email protected] "
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
quantized_by: brooketh
parameter_count: 70553706560
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# Llama 3.3 Instruct 70B
- **Creator:** [meta-llama](https://huggingface.co/meta-llama/)
- **Original:** [Llama 3.3 Instruct 70B](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct)
- **Date Created:** 2024-11-26
- **Trained Context:** 131072 tokens
- **Description:** The original text-only instruct-tuned model from Meta.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can be run efficiently on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
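To get an intuition for the tradeoff, you can roughly estimate a quantized file's size from the parameter count and the effective bits per weight. The sketch below uses the parameter count listed above; the bits-per-weight figures are ballpark assumptions for common llama.cpp quant types, not exact specification values.

```python
def estimated_size_gib(param_count: int, bits_per_weight: float) -> float:
    # Rough GGUF file size: parameters * bits per weight / 8 bytes, in GiB.
    # Ignores metadata and per-block overhead, so real files differ slightly.
    return param_count * bits_per_weight / 8 / 2**30

PARAMS = 70_553_706_560  # parameter count of this 70B model

# Approximate effective bits per weight (ballpark assumptions).
for name, bpw in [("F16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.85), ("Q2_K", 2.6)]:
    print(f"{name:7s} ~{estimated_size_gib(PARAMS, bpw):6.1f} GiB")
```

This is why a 70B model that needs well over 100 GiB at full precision can fit on far more modest hardware at 4-bit quantization.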
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; margin-right: auto;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
*** |
ajtorek/electra-wac-babylm-False-key-second | ajtorek | "2025-04-09T00:36:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-09T00:35:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SuperFocus/SuFin_QA_mnr2_multi_QA_75_E_d_5 | SuperFocus | "2023-04-04T20:05:23Z" | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-04-04T20:05:06Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SuperFocus/SuFin_QA_mnr2_multi_QA_75_E_d_5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('SuperFocus/SuFin_QA_mnr2_multi_QA_75_E_d_5')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('SuperFocus/SuFin_QA_mnr2_multi_QA_75_E_d_5')
model = AutoModel.from_pretrained('SuperFocus/SuFin_QA_mnr2_multi_QA_75_E_d_5')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=SuperFocus/SuFin_QA_mnr2_multi_QA_75_E_d_5)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 30270 with parameters:
```
{'batch_size': 4}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
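For readers unfamiliar with this loss, here is a minimal pure-Python sketch of how MultipleNegativesRankingLoss scores a batch: each anchor's true positive is ranked against every other positive in the batch (in-batch negatives) via scaled cosine similarity and cross-entropy. This is illustrative only; the real implementation is vectorized in PyTorch.

```python
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mnr_loss(anchors, positives, scale=20.0):
    # For each anchor i, positives[i] is the true pair; every other
    # positives[j] acts as an in-batch negative. Cross-entropy over the
    # scaled similarity row, averaged over the batch.
    n = len(anchors)
    total = 0.0
    for i in range(n):
        logits = [scale * cos_sim(anchors[i], positives[j]) for j in range(n)]
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[i]  # -log softmax probability of the true pair
    return total / n
```

With `scale=20.0` and `similarity_fct='cos_sim'` this mirrors the configuration shown above: well-separated pairs drive the loss toward zero, while confusable pairs are penalized.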
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3027,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pfunk/Pong-v4-DQPN_p100_e0.50-seed1 | pfunk | "2023-02-10T00:43:15Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-10T00:42:53Z" | ---
tags:
- Pong-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v4
type: Pong-v4
metrics:
- type: mean_reward
value: 7.40 +/- 5.30
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p100_e0.50.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p100_e0.50]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p100_e0.50 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_e0.50-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_e0.50-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_e0.50-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p100_e0.50 --start-policy-f 100000 --end-policy-f 1000 --evaluation-fraction 0.50 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.5,
'exp_name': 'DQPN_p100_e0.50',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 100000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
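The `start_e`, `end_e`, and `exploration_fraction` values above drive CleanRL's linear epsilon-greedy schedule, which can be sketched as:

```python
def linear_schedule(start_e: float, end_e: float, duration: float, t: int) -> float:
    # Linearly decay epsilon from start_e to end_e over `duration` steps,
    # then hold it at end_e for the rest of training.
    slope = (end_e - start_e) / duration
    return max(slope * t + start_e, end_e)

# With the hyperparameters above:
# duration = exploration_fraction * total_timesteps = 0.1 * 10_000_000
duration = 0.1 * 10_000_000
for t in (0, 500_000, 1_000_000, 5_000_000):
    print(t, round(linear_schedule(1.0, 0.01, duration, t), 3))
```

So exploration is fully random at step 0, decays over the first million steps, and stays at 1% random actions for the remaining 90% of training.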
|
gokuls/distilbert_sa_GLUE_Experiment_qnli_96 | gokuls | "2023-01-25T04:51:51Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-01-25T04:39:35Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_sa_GLUE_Experiment_qnli_96
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.604978949295259
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_qnli_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6582
- Accuracy: 0.6050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6858 | 1.0 | 410 | 0.6653 | 0.6013 |
| 0.658 | 2.0 | 820 | 0.6582 | 0.6050 |
| 0.6395 | 3.0 | 1230 | 0.6607 | 0.6022 |
| 0.6229 | 4.0 | 1640 | 0.6699 | 0.6000 |
| 0.6087 | 5.0 | 2050 | 0.6770 | 0.5929 |
| 0.5946 | 6.0 | 2460 | 0.6980 | 0.5951 |
| 0.581 | 7.0 | 2870 | 0.7427 | 0.5854 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AOLCDROM/YourTTS-Fr-En-De-Es | AOLCDROM | "2023-08-24T18:41:47Z" | 5 | 3 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | "2023-06-30T18:38:39Z" | Multispeaker, Multilingual YourTTS model trained using Coqui TTS
Trained languages: English, German, Spanish, French
Multiple speakers in all languages, see breakdown for speaker:language training pairs
Note: this was trained using the default YourTTS character set, with minimal text normalizers/cleaners, as they do more harm than good with my datasets.
Voices can be used in other trained languages, so accented speech will carry across speakers/languages
(e.g., make a French voice speak English to get French-accented English)
Trained on: LJSpeech 1.1 (English), CSS10 (Spanish, German, French), VCTK (English)
Voices/Languages:
en-us
'VCTK_johnw': 0, 'VCTK_lah': 1, 'VCTK_ljs': 2, 'VCTK_p294': 3, 'VCTK_p297': 4, 'VCTK_p299': 5, 'VCTK_p300': 6, 'VCTK_p301': 7, 'VCTK_p305': 8,
'VCTK_p306': 9, 'VCTK_p308': 10, 'VCTK_p310': 11, 'VCTK_p311': 12, 'VCTK_p318': 13, 'VCTK_p329': 14, 'VCTK_p330': 15, 'VCTK_p333': 16,
'VCTK_p334': 17, 'VCTK_p339': 18, 'VCTK_p341': 19, 'VCTK_p345': 20, 'VCTK_p360': 21, 'VCTK_p361': 22, 'VCTK_tomh': 23,
es-mx
'VCTK_m1': 24, 'VCTK_m2': 25, 'VCTK_m3': 26, 'VCTK_m4': 27, 'VCTK_m5': 28, 'VCTK_m6': 29,
es
'VCTK_es1': 30, 'VCTK_es2': 31,
en-gb
'VCTK_ruthg': 34,
de
'VCTK_evak': 35, 'VCTK_hok': 36,
fr
'VCTK_bern': 37, 'VCTK_gilles': 38,
en-in
'VCTK_p248': 39, 'VCTK_p251': 40, 'VCTK_p376': 41,
en-ir
'VCTK_p238': 42, 'VCTK_p245': 43, 'VCTK_p261': 44, 'VCTK_p266': 45, 'VCTK_p283': 46, 'VCTK_p288': 47, 'VCTK_p292': 48, 'VCTK_p293': 49,
'VCTK_p298': 50, 'VCTK_p313': 51, 'VCTK_p340': 52, 'VCTK_p351': 53, 'VCTK_p364': 54,
en-gb-s
'VCTK_p225': 55, 'VCTK_p226': 56, 'VCTK_p228': 57, 'VCTK_p229': 58, 'VCTK_p231': 59, 'VCTK_p232': 60, 'VCTK_p239': 61, 'VCTK_p240': 62,
'VCTK_p250': 63, 'VCTK_p254': 64, 'VCTK_p256': 65, 'VCTK_p257': 66, 'VCTK_p258': 67, 'VCTK_p268': 68}
---
license: unknown
I'm not a lawyer. Don't be an idiot. You are responsible for your own adult actions. Presumably the licensing follows the dataset licensing, depending on who you ask. Argue amongst yourselves, I'll be over here on the computer.
---
|
secmlr/rz_simplier_reasoning_VD-DS-Clean-8k_VD-DS-Clean-16k_Qwen2.5-7B-Instruct_full_sft_1e-5_sft | secmlr | "2025-03-13T02:12:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:secmlr/VD-DS-Clean-8k_VD-DS-Clean-16k_Qwen2.5-7B-Instruct_full_sft_1e-5",
"base_model:finetune:secmlr/VD-DS-Clean-8k_VD-DS-Clean-16k_Qwen2.5-7B-Instruct_full_sft_1e-5",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-12T22:24:28Z" | ---
library_name: transformers
license: apache-2.0
base_model: secmlr/VD-DS-Clean-8k_VD-DS-Clean-16k_Qwen2.5-7B-Instruct_full_sft_1e-5
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: rz_simplier_reasoning_VD-DS-Clean-8k_VD-DS-Clean-16k_Qwen2.5-7B-Instruct_full_sft_1e-5_sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rz_simplier_reasoning_VD-DS-Clean-8k_VD-DS-Clean-16k_Qwen2.5-7B-Instruct_full_sft_1e-5_sft
This model is a fine-tuned version of [secmlr/VD-DS-Clean-8k_VD-DS-Clean-16k_Qwen2.5-7B-Instruct_full_sft_1e-5](https://huggingface.co/secmlr/VD-DS-Clean-8k_VD-DS-Clean-16k_Qwen2.5-7B-Instruct_full_sft_1e-5) on the rz_simplier_reasoning dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 12
- total_train_batch_size: 48
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
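The cosine scheduler with 10% linear warmup listed above can be sketched as a step-to-learning-rate function; this is a simplified version of what the `transformers` cosine-with-warmup scheduler computes internally, using the values from the hyperparameters above.

```python
import math

def lr_at_step(step: int, total_steps: int, base_lr: float = 1e-5,
               warmup_ratio: float = 0.1) -> float:
    # Linear warmup from 0 to base_lr over the first warmup_ratio of training,
    # then cosine decay from base_lr down to 0.
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The learning rate therefore peaks at 1e-05 exactly when warmup ends and decays smoothly to zero by the final step.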
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
MrRobotoAI/209-Q4_K_M-GGUF | MrRobotoAI | "2025-03-13T17:55:59Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/209",
"base_model:quantized:MrRobotoAI/209",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-13T17:55:36Z" | ---
base_model: MrRobotoAI/209
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/209-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/209`](https://huggingface.co/MrRobotoAI/209) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/209) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/209-Q4_K_M-GGUF --hf-file 209-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/209-Q4_K_M-GGUF --hf-file 209-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/209-Q4_K_M-GGUF --hf-file 209-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/209-Q4_K_M-GGUF --hf-file 209-q4_k_m.gguf -c 2048
```
|
andricValdez/roberta-base-finetuned-coling24 | andricValdez | "2024-11-15T22:00:56Z" | 25 | 0 | null | [
"tensorboard",
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | "2024-11-15T06:01:07Z" | ---
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-finetuned-coling24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-coling24
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3002
- Accuracy: 0.9599
- F1: 0.9594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 9544 | 0.0976 | 0.9654 | 0.9652 |
| 0.0773 | 2.0 | 19088 | 0.1342 | 0.9580 | 0.9575 |
| 0.0773 | 3.0 | 28632 | 0.2332 | 0.9514 | 0.9507 |
| 0.0249 | 4.0 | 38176 | 0.2737 | 0.9566 | 0.9560 |
| 0.0249 | 5.0 | 47720 | 0.3002 | 0.9599 | 0.9594 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
sd-concepts-library/3d-female-cyborgs | sd-concepts-library | "2022-09-17T20:15:59Z" | 0 | 39 | null | [
"license:mit",
"region:us"
] | null | "2022-09-17T20:15:45Z" | ---
license: mit
---
### 3d Female Cyborgs on Stable Diffusion
This is the `<A female cyborg>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
AlexanderLab/amtxd | AlexanderLab | "2025-02-26T12:12:25Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-26T10:41:10Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: amtxd
---
# Amtxd
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `amtxd` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('AlexanderLab/amtxd', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
DevsDoCode/Llama-3-8B-Instruct-1048k | DevsDoCode | "2024-04-30T11:21:40Z" | 10 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2402.08268",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-30T10:52:15Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- meta
- llama-3
license: llama3
---
<a href="https://www.youtube.com/@devsdocode" target="_blank"><img src="https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/AElQ9kmPlaG626QihRBrJ.png" width="200"/></a>
# Llama-3 8B Instruct 1048k
For more info see our [Youtube Channel](https://www.youtube.com/@devsdocode)
This model extends Llama-3 8B's context length from 8k to >1048k tokens, developed by Devs Do Code. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total across all stages, which is <0.01% of Llama-3's original pre-training data.
**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
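As a rough sketch, the NTK-aware rule from [1] scales the RoPE base frequency by `scale ** (head_dim / (head_dim - 2))` when stretching the context by `scale`. The base theta of 500000 and head dimension of 128 below are assumed Llama-3 values; note that this only gives the *initialization*, and the card states the final theta values (see the Progressive Training table) were then optimized empirically, so they differ from this starting point.

```python
def ntk_rope_theta(base_theta: float, scale: float, head_dim: int) -> float:
    # NTK-aware starting point for RoPE theta when stretching context by `scale`:
    # theta' = theta * scale ** (head_dim / (head_dim - 2)).
    return base_theta * scale ** (head_dim / (head_dim - 2))

# Assumed Llama-3 values: base RoPE theta 500000, per-head dimension 128.
# Extending 8k -> 65k context is a scale factor of 8.
print(f"{ntk_rope_theta(500_000.0, 8.0, 128):.3e}")
```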
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 16 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
**Quants**:
- [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF)
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
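As a quick sanity check, the implied grid carbon intensity can be back-calculated from the table above. This is a rough sketch: the roughly 0.43 kg CO2eq/kWh figure it yields is an inference from the published GPU hours, TDP, and emissions numbers, not a value Meta reports.

```python
# Back-calculate the implied carbon intensity from the table above.
# GPU hours and emissions come from the card; the intensity is derived.
gpu_hours = {"Llama 3 8B": 1.3e6, "Llama 3 70B": 6.4e6}
emissions_tco2eq = {"Llama 3 8B": 390, "Llama 3 70B": 1900}
tdp_kw = 0.7  # 700 W peak power per H100-80GB

for model, hours in gpu_hours.items():
    energy_kwh = hours * tdp_kw
    intensity = emissions_tco2eq[model] * 1000 / energy_kwh  # kg CO2eq per kWh
    print(f"{model}: {energy_kwh / 1e6:.2f} GWh, ~{intensity:.2f} kg CO2eq/kWh")
```

Both rows land near the same implied intensity, so the table is at least internally consistent.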
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
#### Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two-fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
} |
EasierAI/Qwen-2.5-32B | EasierAI | "2025-02-12T16:34:05Z" | 0 | 0 | null | [
"gguf",
"chat",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2025-02-12T16:27:15Z" | ---
base_model: Qwen/Qwen2.5-32B-Instruct
language:
- en
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Qwen2.5-32B-Instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3772">b3772</a> for quantization.
Original model: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
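If you are templating the prompt by hand (for example, when driving llama.cpp directly rather than through a chat-aware frontend), the format above can be assembled with a small helper. This is a sketch of the ChatML layout shown above, not a replacement for the tokenizer's built-in chat template, which should be preferred when available:

```python
def build_chatml_prompt(system_prompt: str, user_prompt: str) -> str:
    """Assemble a single-turn prompt in the ChatML format shown above."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"  # generation continues after this tag
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "What is GGUF?")
print(prompt)
```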
## What's new:
Update context length settings and tokenizer
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Qwen2.5-32B-Instruct-f16.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/tree/main/Qwen2.5-32B-Instruct-f16) | f16 | 65.54GB | true | Full F16 weights. |
| [Qwen2.5-32B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q8_0.gguf) | Q8_0 | 34.82GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Qwen2.5-32B-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q6_K_L.gguf) | Q6_K_L | 27.26GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Qwen2.5-32B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q6_K.gguf) | Q6_K | 26.89GB | false | Very high quality, near perfect, *recommended*. |
| [Qwen2.5-32B-Instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_L.gguf) | Q5_K_L | 23.74GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Qwen2.5-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 23.26GB | false | High quality, *recommended*. |
| [Qwen2.5-32B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_S.gguf) | Q5_K_S | 22.64GB | false | High quality, *recommended*. |
| [Qwen2.5-32B-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_L.gguf) | Q4_K_L | 20.43GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Qwen2.5-32B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_M.gguf) | Q4_K_M | 19.85GB | false | Good quality, default size for most use cases, *recommended*. |
| [Qwen2.5-32B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_S.gguf) | Q4_K_S | 18.78GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Qwen2.5-32B-Instruct-Q4_0.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_0.gguf) | Q4_0 | 18.71GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Qwen2.5-32B-Instruct-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_0_8_8.gguf) | Q4_0_8_8 | 18.64GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). |
| [Qwen2.5-32B-Instruct-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_0_4_8.gguf) | Q4_0_4_8 | 18.64GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). |
| [Qwen2.5-32B-Instruct-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_0_4_4.gguf) | Q4_0_4_4 | 18.64GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. |
| [Qwen2.5-32B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_XL.gguf) | Q3_K_XL | 17.93GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Qwen2.5-32B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-IQ4_XS.gguf) | IQ4_XS | 17.69GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Qwen2.5-32B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_L.gguf) | Q3_K_L | 17.25GB | false | Lower quality but usable, good for low RAM availability. |
| [Qwen2.5-32B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_M.gguf) | Q3_K_M | 15.94GB | false | Low quality. |
| [Qwen2.5-32B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-IQ3_M.gguf) | IQ3_M | 14.81GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Qwen2.5-32B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_S.gguf) | Q3_K_S | 14.39GB | false | Low quality, not recommended. |
| [Qwen2.5-32B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-IQ3_XS.gguf) | IQ3_XS | 13.71GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Qwen2.5-32B-Instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q2_K_L.gguf) | Q2_K_L | 13.07GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Qwen2.5-32B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q2_K.gguf) | Q2_K | 12.31GB | false | Very low quality but surprisingly usable. |
| [Qwen2.5-32B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-IQ2_M.gguf) | IQ2_M | 11.26GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Qwen2.5-32B-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-IQ2_S.gguf) | IQ2_S | 10.39GB | false | Low quality, uses SOTA techniques to be usable. |
| [Qwen2.5-32B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-IQ2_XS.gguf) | IQ2_XS | 9.96GB | false | Low quality, uses SOTA techniques to be usable. |
| [Qwen2.5-32B-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 9.03GB | false | Very low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Qwen2.5-32B-Instruct-GGUF --include "Qwen2.5-32B-Instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Qwen2.5-32B-Instruct-GGUF --include "Qwen2.5-32B-Instruct-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Qwen2.5-32B-Instruct-Q8_0) or download them all in place (./).
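A Python equivalent uses `snapshot_download` from `huggingface_hub`. The repo id and patterns in the comments mirror the CLI examples above; the helper name and `local_dir` default are this sketch's choices:

```python
def download_quant(repo_id: str, pattern: str, local_dir: str = "."):
    """Download only the files matching `pattern` from a model repo.

    The import is kept inside the function so the helper can be defined
    even where huggingface_hub is not installed.
    """
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id=repo_id,
        allow_patterns=[pattern],
        local_dir=local_dir,
    )

# e.g. a single file, or a split quant's whole folder:
# download_quant("bartowski/Qwen2.5-32B-Instruct-GGUF", "*Q4_K_M.gguf")
# download_quant("bartowski/Qwen2.5-32B-Instruct-GGUF", "Qwen2.5-32B-Instruct-Q8_0/*")
```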
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
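The rule of thumb above (a file 1-2GB under your memory budget) can be expressed as a tiny helper. The sizes below are copied from this repo's quant table; the 2GB headroom default reflects the guideline in the text, not a hard requirement:

```python
# File sizes (GB) from the quant table above, largest to smallest.
QUANT_SIZES_GB = {
    "Q8_0": 34.82, "Q6_K": 26.89, "Q5_K_M": 23.26, "Q4_K_M": 19.85,
    "IQ4_XS": 17.69, "Q3_K_M": 15.94, "IQ3_M": 14.81, "IQ2_M": 11.26,
}

def pick_quant(memory_gb: float, headroom_gb: float = 2.0):
    """Return the largest quant that fits under memory_gb minus headroom."""
    budget = memory_gb - headroom_gb
    fits = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    return max(fits, key=fits.get) if fits else None

print(pick_quant(24.0))  # a 24GB card leaves room for Q4_K_M
```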
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD cards, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
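When scripting over a folder of downloads, the I-quant vs. K-quant distinction can be read straight off the filename. This is a heuristic sketch that assumes the naming scheme used in this repo (`Model-Name-QUANT.gguf`):

```python
import re

def quant_family(filename: str) -> str:
    """Classify a GGUF filename as I-quant, K-quant, or legacy/other."""
    suffix = filename.rsplit("-", 1)[-1].removesuffix(".gguf")
    if re.match(r"IQ\d", suffix):
        return "I-quant"       # e.g. IQ3_M, IQ2_XS
    if "_K" in suffix:
        return "K-quant"       # e.g. Q5_K_M, Q4_K_S
    return "legacy/other"      # e.g. Q4_0, f16

print(quant_family("Qwen2.5-32B-Instruct-IQ3_M.gguf"))   # I-quant
print(quant_family("Qwen2.5-32B-Instruct-Q5_K_M.gguf"))  # K-quant
```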
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2 | cleanrl | "2023-03-25T17:55:17Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Frostbite-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-25T17:55:15Z" | ---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 314.00 +/- 18.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
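The derived entries in this dump are internally consistent; as a quick sanity check, they follow from the base settings (assuming cleanba's usual derivations, which the card does not spell out):

```python
# Base settings taken from the hyperparameter dump above (assumed derivation rules)
local_num_envs, world_size, num_steps = 30, 4, 20
num_minibatches, total_timesteps = 2, 50_000_000

num_envs = local_num_envs * world_size                       # 120
batch_size = num_envs * num_steps                            # 2400
local_batch_size = local_num_envs * num_steps                # 600
minibatch_size = batch_size // num_minibatches               # 1200
local_minibatch_size = local_batch_size // num_minibatches   # 300
num_updates = total_timesteps // batch_size                  # 20833

print(num_envs, batch_size, minibatch_size, num_updates)  # 120 2400 1200 20833
```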
|
SolaireOfTheSun/openchat_3.5-EducationAID-Biologie-adapters | SolaireOfTheSun | "2024-03-29T22:57:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-29T22:57:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/mistral-ft-optimized-1227-5.0bpw-h6-exl2 | LoneStriker | "2023-12-30T13:39:30Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-30T13:37:28Z" | ---
license: apache-2.0
---
This model is intended to be a strong base suitable for downstream fine-tuning on a variety of tasks. Based on our internal evaluations, we believe it's one of the strongest models for most downstream tasks. You can read more about our development and evaluation process [here](https://openpipe.ai/blog/mistral-7b-fine-tune-optimized).
It is a hierarchical SLERP merge of teknium/OpenHermes-2.5-Mistral-7B, Intel/neural-chat-7b-v3-3, meta-math/MetaMath-Mistral-7B, and openchat/openchat-3.5-1210. berkeley-nest/Starling-LM-7B-alpha was omitted from this version of the model. |
dathudeptrai/tts-tacotron2-synpaflex-fr | dathudeptrai | "2021-08-12T13:07:20Z" | 0 | 1 | tensorflowtts | [
"tensorflowtts",
"audio",
"text-to-speech",
"text-to-mel",
"fr",
"dataset:synpaflex",
"arxiv:1712.05884",
"arxiv:1710.08969",
"license:apache-2.0",
"region:us"
] | text-to-speech | "2022-03-02T23:29:05Z" | ---
tags:
- tensorflowtts
- audio
- text-to-speech
- text-to-mel
language: fr
license: apache-2.0
datasets:
- synpaflex
widget:
- text: "Oh, je voudrais tant que tu te souviennes Des jours heureux quand nous étions amis"
---
# Tacotron 2 with Guided Attention trained on Synpaflex (Fr)
This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on the Synpaflex dataset (Fr). For details of the model, we encourage you to read more about
[TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).
## Install TensorFlowTTS
First of all, please install TensorFlowTTS with the following command:
```
pip install TensorFlowTTS
```
### Converting your Text to Mel Spectrogram
```python
import numpy as np
import soundfile as sf
import yaml
import tensorflow as tf
from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel
processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr")
tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr")
text = "Oh, je voudrais tant que tu te souviennes Des jours heureux quand nous étions amis"
input_ids = processor.text_to_sequence(text)
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
)
```
#### Referencing Tacotron 2
```
@article{DBLP:journals/corr/abs-1712-05884,
author = {Jonathan Shen and
Ruoming Pang and
Ron J. Weiss and
Mike Schuster and
Navdeep Jaitly and
Zongheng Yang and
Zhifeng Chen and
Yu Zhang and
Yuxuan Wang and
R. J. Skerry{-}Ryan and
Rif A. Saurous and
Yannis Agiomyrgiannakis and
Yonghui Wu},
title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram
Predictions},
journal = {CoRR},
volume = {abs/1712.05884},
year = {2017},
url = {http://arxiv.org/abs/1712.05884},
archivePrefix = {arXiv},
eprint = {1712.05884},
timestamp = {Thu, 28 Nov 2019 08:59:52 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
#### Referencing TensorFlowTTS
```
@misc{TFTTS,
author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata,
Trinh Le and Yunchao He},
title = {TensorflowTTS},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}},
}
``` |
stablediffusionapi/ae-sdxl-v4 | stablediffusionapi | "2024-02-02T21:15:14Z" | 1 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-02-02T21:13:19Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# AE-SDXL-v4 API Inference

## Get API Key
Get your API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "ae-sdxl-v4".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/ae-sdxl-v4)
Model link: [View model](https://modelslab.com/models/ae-sdxl-v4)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "ae-sdxl-v4",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
dotan1111/BioTokenizer-BFD-BPE-6400 | dotan1111 | "2023-09-13T09:25:48Z" | 0 | 0 | null | [
"biology",
"bioinformatics",
"tokenizers",
"region:us"
] | null | "2023-09-13T09:25:17Z" | ---
tags:
- biology
- bioinformatics
- tokenizers
---
# Effect of Tokenization on Transformers for Biological Sequences
## Abstract:
Deep learning models are transforming biological research. Many bioinformatics and comparative genomics algorithms analyze genomic data, either DNA or protein sequences. Examples include sequence alignments, phylogenetic tree inference and automatic classification of protein functions. Among these deep learning algorithms, models for processing natural languages, developed in the natural language processing (NLP) community, were recently applied to biological sequences. However, biological sequences are different from natural languages, such as English and French, in which segmentation of the text into separate words is relatively straightforward. Moreover, biological sequences are characterized by extremely long sentences, which hamper their processing by current machine-learning models, notably the transformer architecture. In NLP, one of the first processing steps is to transform the raw text into a list of tokens. Deep-learning applications to biological sequence data mostly segment proteins and DNA into single characters. In this work, we study the effect of alternative tokenization algorithms on eight different tasks in biology, from predicting the function of proteins and their stability, through nucleotide sequence alignment, to classifying proteins into specific families. We demonstrate that applying alternative tokenization algorithms can increase accuracy and, at the same time, substantially reduce the input length compared to the trivial tokenizer in which each character is a token. Furthermore, applying these tokenization algorithms allows interpreting trained models, taking into account dependencies among positions. Finally, we trained these tokenizers on a large dataset of protein sequences containing more than 400 billion amino acids, which resulted in over a three-fold decrease in the number of tokens. 
We then tested these tokenizers trained on large-scale data on the above specific tasks and showed that for some tasks it is highly beneficial to train database-specific tokenizers. Our study suggests that tokenizers are likely to be a critical component in future deep-network analysis of biological sequence data.

Different tokenization algorithms can be applied to biological sequences, as exemplified for the sequence “AAGTCAAGGATC”. (a) The baseline “words” tokenizer assumes a dictionary consisting of the nucleotides: “A”, “C”, “G” and “T”. The length of the encoded sequence is 12, i.e., the number of nucleotides; (b) The “pairs” tokenizer assumes a dictionary consisting of all possible nucleotide pairs. The length of the encoded sequences is typically halved; (c) A sophisticated dictionary consisting of only three tokens: “AAG”, “TC” and “GA”. The encoded sequence for this dictionary contains only five tokens.
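The segmentation in panel (c) can be reproduced with a toy greedy longest-match tokenizer (an illustrative sketch only; the paper's BPE, WordPiece and Unigram tokenizers use more sophisticated segmentation rules):

```python
def greedy_tokenize(sequence, vocab):
    """Segment a sequence by greedy longest-match against a fixed vocabulary."""
    max_len = max(len(token) for token in vocab)
    tokens, i = [], 0
    while i < len(sequence):
        # Try the longest candidate first, then progressively shorter ones
        for size in range(min(max_len, len(sequence) - i), 0, -1):
            piece = sequence[i:i + size]
            if piece in vocab:
                tokens.append(piece)
                i += size
                break
        else:
            raise ValueError(f"no token in the vocabulary matches position {i}")
    return tokens

print(greedy_tokenize("AAGTCAAGGATC", {"AAG", "TC", "GA"}))
# ['AAG', 'TC', 'AAG', 'GA', 'TC'] (five tokens, as in the example above)
```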
## Data:
The "data" folder contains the train, valid and test data of seven of the eight datasets used in the paper.
## BFD Tokenizers:
We trained BPE, WordPiece and Unigram tokenizers on samples of proteins from the 2.2 billion protein sequences of the BFD dataset (Steinegger and Söding 2018). We evaluated the average sequence length as a function of the vocabulary size and the number of sequences in the training data.



Effect of vocabulary size and number of training samples on the three tokenizers: BPE, WordPiece and Unigram. The darker the color the higher the average number of tokens per protein. Increasing the vocabulary and the training size reduces the number of tokens per protein for all of the tested tokenizers.
We uploaded the "BFD_Tokenizers", which were trained on 10,000,000 sequences randomly sampled from the BFD dataset.
## Github
The code, datasets and trained tokenizers are available on https://github.com/idotan286/BiologicalTokenizers/.
## APA
```
Dotan, E., Jaschek, G., Pupko, T., & Belinkov, Y. (2023). Effect of Tokenization on Transformers for Biological Sequences. bioRxiv. https://doi.org/10.1101/2023.08.15.553415
```
## BibTeX
```
@article{Dotan_Effect_of_Tokenization_2023,
author = {Dotan, Edo and Jaschek, Gal and Pupko, Tal and Belinkov, Yonatan},
doi = {10.1101/2023.08.15.553415},
journal = {bioRxiv},
month = aug,
title = {{Effect of Tokenization on Transformers for Biological Sequences}},
year = {2023}
}
``` |
notbadai/notbad_v1_1_mistral_24b | notbadai | "2025-04-07T15:51:20Z" | 0 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:2403.09629",
"arxiv:2503.20783",
"base_model:mistralai/Mistral-Small-24B-Instruct-2501",
"base_model:finetune:mistralai/Mistral-Small-24B-Instruct-2501",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-07T14:58:49Z" | ---
license: apache-2.0
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
pipeline_tag: text-generation
library_name: transformers
---
# Model Card for Notbad v1.1 Mistral 24B
This model has better IFEval scores than our previous model
[Notbad v1.0 Mistral 24B](https://huggingface.co/notbadai/notbad_v1_0_mistral_24b).
Notbad v1.1 Mistral 24B is a reasoning model trained in math and Python coding.
This model is built upon the
[Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)
and has been further trained with reinforcement learning on math and coding.
One of the key features of Notbad v1.1 is its ability to produce shorter and cleaner reasoning outputs.
We used open datasets and employed reinforcement learning techniques that continue
our work on
[Quiet Star](https://arxiv.org/abs/2403.09629)
and are similar to
[Dr. GRPO](https://arxiv.org/abs/2503.20783).
The reasoning capabilities in this model are from self-improvement and not distilled from any other model.
It is the result of fine-tuning on data sampled from several of our RL models, starting from the
[Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501).
Special thanks to [Lambda](https://lambda.ai/) and [Deep Infra](https://deepinfra.com/)
for providing help with compute resources for our research and training this model.
You can try the model on **[chat.labml.ai](https://chat.labml.ai)**.
## Benchmark results
| Evaluation | notbad_v1_1_mistral_24b | notbad_v1_0_mistral_24b | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|-------------------------|-------------------------|---------------------------------|--------------|---------------|-------------|------------------------|
| mmlu_pro | 0.673 | 0.642 | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 |
| gpqa_main | 0.467 | 0.447 | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 |
**Math & Coding**
| Evaluation | notbad_v1_1_mistral_24b | notbad_v1_0_mistral_24b | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|-------------------------|-------------------------|---------------------------------|--------------|---------------|-------------|------------------------|
| humaneval | 0.872 | 0.869 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 |
| math | 0.749 | 0.752 | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 |
**Instruction following**
| Evaluation | notbad_v1_1_mistral_24b | notbad_v1_0_mistral_24b | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|-------------------------|-------------------------|---------------------------------|--------------|---------------|-------------|------------------------|
| ifeval | 0.779 | 0.514 | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 |
**Note**:
- Benchmarks are
from [Mistral-Small-24B-Instruct-2501 Model Card](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) |
Bhuvana17/parrots-xzg | Bhuvana17 | "2023-11-07T05:40:50Z" | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-11-07T05:36:48Z" | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Parrots-xzg Dreambooth model trained by Bhuvana17 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: MITS-1965
Sample pictures of this concept:

|
PleIAs/Cassandre-RAG | PleIAs | "2024-10-18T09:29:05Z" | 61 | 6 | null | [
"safetensors",
"llama",
"region:us"
] | null | "2024-09-17T14:32:19Z" | # Cassandre-RAG
Cassandre-RAG is a fine-tuned **llama-3.1-8b model**, built for RAG on French administrative documents, with a focus on sources from school administration.
The model has been trained to expect a predefined input structure that allows it to perform RAG tasks very efficiently while clearly citing the specific excerpts and source documents used to generate answers.
## Training
The model was fine-tuned on a specialized corpus consisting of:
1. Synthetic queries: Generated from chunks of text extracted from French administrative documents.
2. Retrieved documents: For each synthetic query, relevant documents were retrieved using the BM25 ranking algorithm.
3. Generated answers: Responses to the synthetic queries were created based on the retrieved documents.
```yaml
Training Hyperparameters:
Max Steps: 3000
Learning Rate: 3e-4
Batch Size: 2 per device
Gradient Accumulation Steps: 4
Max Sequence Length: 8192
Weight Decay: 0.001
Warmup Ratio: 0.03
LR Scheduler: Linear
Optimizer: paged_adamw_32bit
LoRA Configuration:
LoRA Alpha: 16
LoRA Dropout: 0.1
LoRA R: 64
Target Modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
Quantization:
Quantization: 4-bit
Quantization Type: nf4
Compute Dtype: float16
```
## Usage
Cassandre-RAG uses a custom syntax for parsing sources and generating sourced output.
Each source should be preceded by an ID encapsulated in double asterisks (e.g., \*\*SOURCE_ID\*\*).
The input structure expected by Cassandre is the following:
```python
prompt = f"""### Query ###\n{user_message}\n\n### Source ###\n{fiches}\n\n### Answer ###\n"""
```
The **"Query"** section contains the question or keywords that the user inputs.
The **"Source"** section contains the documents retrieved from a vector database such as DuckDB or LanceDB.
The **"Answer"** marker indicates where the model should insert the generated answer to the query, based on the retrieved documents.
This answer will also contain the excerpts of the documents used and the ID of those documents, using this format:
```xml
<ref text="[Quoted text from source]">[Source ID]</ref>
```
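Since the reference tags are plain markup, downstream code can recover the citations with a small regular expression (a sketch; only the tag grammar shown above is assumed):

```python
import re

answer = (
    'Allez à la mairie avec les documents nécessaires. '
    '<ref text="se rendre à la mairie avec un justificatif de domicile">DOC001</ref>'
)

# Each match yields a (quoted excerpt, source ID) pair
citations = re.findall(r'<ref text="([^"]*)">([^<]*)</ref>', answer)
print(citations)
# [('se rendre à la mairie avec un justificatif de domicile', 'DOC001')]
```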
### Example Usage
In this example, we use BGE for the embeddings and LanceDB for retrieval. You can use your preferred embedding model to create the embeddings and add them to the database. LanceDB's hybrid search feature allows us to combine vector search with keyword search for better retrieval.
```python
import lancedb
from tqdm import tqdm
from vllm import LLM, SamplingParams
from lancedb.pydantic import LanceModel, Vector
from lancedb.embeddings import get_registry
# Initialize LanceDB
db = lancedb.connect("lancedb_data")
# We will create some fictitious education documents to add to the database
documents = [
{
"hash": "DOC001",
"main_title": "Inscription à l'école primaire",
"text": "L'inscription à l'école primaire en France se fait en deux étapes. Premièrement, les parents doivent se rendre à la mairie avec un justificatif de domicile, le livret de famille et le carnet de santé de l'enfant. Ensuite, ils doivent finaliser l'inscription directement à l'école. L'âge minimal pour l'inscription est de 3 ans."
},
{
"hash": "DOC002",
"main_title": "Calendrier des inscriptions scolaires",
"text": "Les inscriptions à l'école primaire doivent être effectuées au plus tard au mois de juin précédant la rentrée scolaire. Il est conseillé de s'y prendre à l'avance car certaines communes ont des périodes d'inscription spécifiques. La rentrée scolaire a généralement lieu début septembre."
},
{
"hash": "DOC003",
"main_title": "Documents requis pour l'inscription scolaire",
"text": "Pour inscrire un enfant à l'école primaire, les documents suivants sont généralement requis : justificatif de domicile de moins de 3 mois, livret de famille ou extrait d'acte de naissance, carnet de santé avec vaccinations à jour, et éventuellement le certificat de radiation si l'enfant était précédemment inscrit dans une autre école."
}
]
# Load the BGE embedding model and define the table schema for LanceDB
model = get_registry().get("sentence-transformers").create(name="BAAI/bge-m3", device="cuda")
class Documents(LanceModel):
main_title: str
text: str = model.SourceField()
hash: str
vector: Vector(model.ndims()) = model.VectorField()
# Create the table
table = db.create_table("example", schema=Documents, mode="overwrite")
def process_batch(batch):
processed_documents = []
for item in batch:
try:
processed_documents.append({
"hash": item.get("hash", ""),
"main_title": item.get("main_title", ""),
"text": item.get("text", "")
# Add any other fields you want to include
})
except Exception as e:
print(f"Error processing item: {item}")
print(f"Error message: {str(e)}")
return processed_documents
# Process and add documents in batches
batch_size = 2 # Adjust as needed
for i in tqdm(range(0, len(documents), batch_size)):
batch = documents[i:i+batch_size]
processed_batch = process_batch(batch)
if processed_batch: # Only add if the batch is not empty
table.add(processed_batch)
# Load the model
model_name = "PleIAs/Cassandre-RAG"
llm = LLM(model_name, max_model_len=8128)
# Set sampling parameters
sampling_params = SamplingParams(
temperature=0.7,
top_p=0.95,
max_tokens=3000,
presence_penalty=1.2,
stop=["#END#"]
)
def hybrid_search(text):
results = table.search(text, query_type="hybrid").limit(3).to_pandas()
document = []
for _, row in results.iterrows():
hash_id = str(row['hash'])
title = row['main_title']
content = row['text']
document.append(f"**{hash_id}**\n{title}\n{content}")
return "\n\n".join(document)
def prepare_prompt(query, sources):
return f"### Query ###\n{query}\n\n### Source ###\n{sources}\n\n### Answer ###\n"
# Example query
query = "Quelles sont les démarches pour inscrire un enfant à l'école primaire en France?"
# Perform hybrid search
sources = hybrid_search(query)
# Prepare the prompt
prompt = prepare_prompt(query, sources)
# Generate the response
outputs = llm.generate([prompt], sampling_params)
generated_text = outputs[0].outputs[0].text
import re

# Illustrative helper (assumed here, since it is not defined elsewhere in this card):
# renders <ref text="...">ID</ref> tags as readable inline citations
def simple_format_references(text):
    return re.sub(r'<ref text="([^"]*)">([^<]*)</ref>', r'[\2: "\1"]', text)

print("Query:", query)
print("\nSources:")
print(sources)
print("\nGenerated Response:")
print(generated_text)
print("\nFormatted Response:")
print(simple_format_references(generated_text))
```
Here is an example of the response we will get:
```yaml
### Query ###
Quelles sont les démarches pour inscrire un enfant à l'école primaire en France?
### Source ###
**DOC001**
Inscription à l'école primaire
L'inscription à l'école primaire en France se fait en deux étapes. Premièrement, les parents doivent se rendre à la mairie avec un justificatif de domicile, le livret de famille et le carnet de santé de l'enfant. Ensuite, ils doivent finaliser l'inscription directement à l'école. L'âge minimal pour l'inscription est de 3 ans.
**DOC002**
Calendrier des inscriptions scolaires
Les inscriptions à l'école primaire doivent être effectuées au plus tard au mois de juin précédant la rentrée scolaire. Il est conseillé de s'y prendre à l'avance car certaines communes ont des périodes d'inscription spécifiques. La rentrée scolaire a généralement lieu début septembre.
**DOC003**
Documents requis pour l'inscription scolaire
Pour inscrire un enfant à l'école primaire, les documents suivants sont généralement requis : justificatif de domicile de moins de 3 mois, livret de famille ou extrait d'acte de naissance, carnet de santé avec vaccinations à jour, et éventuellement le certificat de radiation si l'enfant était précédemment inscrit dans une autre école.
### Answer ###
Pour inscrire un enfant à l'école primaire en France :
1. Allez à la mairie avec les documents nécessaires. <ref text="L'inscription à l'école primaire en France se fait en deux étapes. Premièrement, les parents doivent se rendre à la mairie avec un justificatif de domicile, le livret de famille et le carnet de santé de l'enfant">DOC001</ref>
2. Inscrivez-vous avant juin pour la rentrée de septembre. <ref text="Les inscriptions à l'école primaire doivent être effectuées au plus tard au mois de juin précédant la rentrée scolaire">DOC002</ref>
3. L'enfant doit avoir au moins 3 ans. <ref text="L'âge minimal pour l'inscription est de 3 ans">DOC001</ref>
4. Finalisez l'inscription à l'école. <ref text="Ensuite, ils doivent finaliser l'inscription directement à l'école">DOC001</ref>
Apportez un certificat de radiation si l'enfant change d'école. <ref text="et éventuellement le certificat de radiation si l'enfant était précédemment inscrit dans une autre école">DOC003</ref>
Contactez votre mairie pour plus d'informations.
#END#
``` |
PrunaAI/OpenAssistant-falcon-7b-sft-mix-2000-bnb-4bit-smashed | PrunaAI | "2024-08-02T15:47:59Z" | 95 | 0 | transformers | [
"transformers",
"safetensors",
"RefinedWebModel",
"text-generation",
"pruna-ai",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-04T11:15:35Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results
Detailed efficiency metrics coming soon!
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo OpenAssistant/falcon-7b-sft-mix-2000 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/OpenAssistant-falcon-7b-sft-mix-2000-bnb-4bit-smashed",
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/falcon-7b-sft-mix-2000")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model OpenAssistant/falcon-7b-sft-mix-2000, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
Sahil07/shawgpt-ft | Sahil07 | "2024-04-09T12:01:49Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | "2024-04-09T12:01:44Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: shawgpt-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shawgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
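The total train batch size of 16 above comes from gradient accumulation: gradients from 4 micro-batches of size 4 are averaged before each optimizer step. A minimal sketch of that bookkeeping, with plain numbers standing in for gradient tensors (an illustration of the idea, not the Trainer's actual implementation):

```python
def accumulate(micro_batch_grads, accumulation_steps):
    """Average gradients over `accumulation_steps` micro-batches per optimizer step."""
    updates = []
    buffer, count = 0.0, 0
    for grad in micro_batch_grads:
        buffer += grad
        count += 1
        if count == accumulation_steps:
            updates.append(buffer / accumulation_steps)  # one optimizer step
            buffer, count = 0.0, 0
    return updates

# 8 micro-batches with accumulation_steps=4 -> 2 optimizer steps
print(accumulate([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], 4))  # [2.5, 6.5]
```

Each optimizer step therefore sees 4 × 4 = 16 examples, matching the reported total train batch size.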
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6351 | 1.0 | 1 | 1.1246 |
| 0.6319 | 2.0 | 2 | 1.0575 |
| 0.6034 | 3.0 | 3 | 0.9250 |
| 0.5414 | 4.0 | 4 | 0.8252 |
| 0.4959 | 5.0 | 5 | 0.7540 |
| 0.5698 | 6.0 | 6 | 0.7035 |
| 0.4303 | 7.0 | 7 | 0.6648 |
| 0.4095 | 8.0 | 8 | 0.6361 |
| 0.3955 | 9.0 | 9 | 0.6165 |
| 0.3857 | 10.0 | 10 | 0.6063 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
MayBashendy/ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run2_AugV5_k19_task5_organization | MayBashendy | "2025-01-21T02:36:36Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-20T19:25:28Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run2_AugV5_k19_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run2_AugV5_k19_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8534
- Qwk: 0.3360
- Mse: 0.8534
- Rmse: 0.9238
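Qwk above is the quadratic weighted kappa, an agreement score between predicted and true ordinal labels that penalizes large disagreements more heavily than near-misses (Rmse is simply the square root of Mse). A self-contained sketch of the metric, for illustration only — the evaluation script may use a library implementation:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted kappa for integer labels in [0, n_classes)."""
    n = len(y_true)
    # Observed confusion matrix
    O = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    # Label histograms for each "rater"
    hist_t = [sum(row) for row in O]
    hist_p = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    # Expected matrix under independence, scaled to the same total count
    E = [[hist_t[i] * hist_p[j] / n for j in range(n_classes)]
         for i in range(n_classes)]
    # Quadratic disagreement weights
    W = [[(i - j) ** 2 / (n_classes - 1) ** 2 for j in range(n_classes)]
         for i in range(n_classes)]
    num = sum(W[i][j] * O[i][j] for i in range(n_classes) for j in range(n_classes))
    den = sum(W[i][j] * E[i][j] for i in range(n_classes) for j in range(n_classes))
    return 1.0 - num / den

print(quadratic_weighted_kappa([0, 1, 2], [0, 1, 2], 3))  # perfect agreement -> 1.0
```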
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0333 | 2 | 3.8732 | -0.0134 | 3.8732 | 1.9680 |
| No log | 0.0667 | 4 | 2.0080 | 0.0435 | 2.0080 | 1.4170 |
| No log | 0.1 | 6 | 2.1485 | -0.0086 | 2.1485 | 1.4658 |
| No log | 0.1333 | 8 | 1.9538 | 0.0142 | 1.9538 | 1.3978 |
| No log | 0.1667 | 10 | 1.4849 | 0.0613 | 1.4849 | 1.2186 |
| No log | 0.2 | 12 | 1.4659 | 0.0466 | 1.4659 | 1.2108 |
| No log | 0.2333 | 14 | 1.5843 | 0.0834 | 1.5843 | 1.2587 |
| No log | 0.2667 | 16 | 1.8604 | 0.0536 | 1.8604 | 1.3640 |
| No log | 0.3 | 18 | 2.0805 | 0.0790 | 2.0805 | 1.4424 |
| No log | 0.3333 | 20 | 1.5869 | 0.1659 | 1.5869 | 1.2597 |
| No log | 0.3667 | 22 | 1.0519 | 0.1805 | 1.0519 | 1.0256 |
| No log | 0.4 | 24 | 0.9972 | 0.2140 | 0.9972 | 0.9986 |
| No log | 0.4333 | 26 | 1.0004 | 0.3071 | 1.0004 | 1.0002 |
| No log | 0.4667 | 28 | 1.0164 | 0.2243 | 1.0164 | 1.0082 |
| No log | 0.5 | 30 | 1.1727 | 0.1176 | 1.1727 | 1.0829 |
| No log | 0.5333 | 32 | 1.1364 | 0.1296 | 1.1364 | 1.0660 |
| No log | 0.5667 | 34 | 0.9386 | 0.2643 | 0.9386 | 0.9688 |
| No log | 0.6 | 36 | 0.9683 | 0.2467 | 0.9683 | 0.9840 |
| No log | 0.6333 | 38 | 0.9579 | 0.2770 | 0.9579 | 0.9787 |
| No log | 0.6667 | 40 | 0.9888 | 0.2441 | 0.9888 | 0.9944 |
| No log | 0.7 | 42 | 1.1550 | 0.2045 | 1.1550 | 1.0747 |
| No log | 0.7333 | 44 | 1.1415 | 0.2045 | 1.1415 | 1.0684 |
| No log | 0.7667 | 46 | 0.9475 | 0.2967 | 0.9475 | 0.9734 |
| No log | 0.8 | 48 | 0.9103 | 0.3326 | 0.9103 | 0.9541 |
| No log | 0.8333 | 50 | 0.9165 | 0.3326 | 0.9165 | 0.9573 |
| No log | 0.8667 | 52 | 1.1122 | 0.2543 | 1.1122 | 1.0546 |
| No log | 0.9 | 54 | 1.3151 | 0.1815 | 1.3151 | 1.1468 |
| No log | 0.9333 | 56 | 1.3107 | 0.1966 | 1.3107 | 1.1449 |
| No log | 0.9667 | 58 | 1.2015 | 0.2534 | 1.2015 | 1.0961 |
| No log | 1.0 | 60 | 1.1149 | 0.2995 | 1.1149 | 1.0559 |
| No log | 1.0333 | 62 | 1.0577 | 0.4408 | 1.0577 | 1.0284 |
| No log | 1.0667 | 64 | 1.0127 | 0.4627 | 1.0127 | 1.0063 |
| No log | 1.1 | 66 | 1.0160 | 0.4763 | 1.0160 | 1.0080 |
| No log | 1.1333 | 68 | 0.9951 | 0.4524 | 0.9951 | 0.9975 |
| No log | 1.1667 | 70 | 0.9869 | 0.3804 | 0.9869 | 0.9934 |
| No log | 1.2 | 72 | 1.0200 | 0.3618 | 1.0200 | 1.0099 |
| No log | 1.2333 | 74 | 1.0105 | 0.3525 | 1.0105 | 1.0052 |
| No log | 1.2667 | 76 | 0.9997 | 0.3642 | 0.9997 | 0.9999 |
| No log | 1.3 | 78 | 0.9704 | 0.3378 | 0.9704 | 0.9851 |
| No log | 1.3333 | 80 | 0.9867 | 0.4503 | 0.9867 | 0.9933 |
| No log | 1.3667 | 82 | 0.9417 | 0.4244 | 0.9417 | 0.9704 |
| No log | 1.4 | 84 | 0.9221 | 0.4122 | 0.9221 | 0.9602 |
| No log | 1.4333 | 86 | 0.9438 | 0.3925 | 0.9438 | 0.9715 |
| No log | 1.4667 | 88 | 0.9803 | 0.4628 | 0.9803 | 0.9901 |
| No log | 1.5 | 90 | 0.9662 | 0.4045 | 0.9662 | 0.9830 |
| No log | 1.5333 | 92 | 0.9604 | 0.4524 | 0.9604 | 0.9800 |
| No log | 1.5667 | 94 | 0.9789 | 0.3590 | 0.9789 | 0.9894 |
| No log | 1.6 | 96 | 0.8357 | 0.5218 | 0.8357 | 0.9142 |
| No log | 1.6333 | 98 | 0.8138 | 0.5231 | 0.8138 | 0.9021 |
| No log | 1.6667 | 100 | 0.7472 | 0.4676 | 0.7472 | 0.8644 |
| No log | 1.7 | 102 | 0.7386 | 0.5161 | 0.7386 | 0.8594 |
| No log | 1.7333 | 104 | 0.7525 | 0.6092 | 0.7525 | 0.8675 |
| No log | 1.7667 | 106 | 0.8542 | 0.5222 | 0.8542 | 0.9242 |
| No log | 1.8 | 108 | 0.9170 | 0.4802 | 0.9170 | 0.9576 |
| No log | 1.8333 | 110 | 0.9037 | 0.5027 | 0.9037 | 0.9507 |
| No log | 1.8667 | 112 | 0.8325 | 0.4562 | 0.8325 | 0.9124 |
| No log | 1.9 | 114 | 0.7533 | 0.3961 | 0.7533 | 0.8679 |
| No log | 1.9333 | 116 | 0.7816 | 0.4186 | 0.7816 | 0.8841 |
| No log | 1.9667 | 118 | 0.8153 | 0.3025 | 0.8153 | 0.9029 |
| No log | 2.0 | 120 | 0.9298 | 0.2672 | 0.9298 | 0.9643 |
| No log | 2.0333 | 122 | 0.9030 | 0.2978 | 0.9030 | 0.9503 |
| No log | 2.0667 | 124 | 0.8132 | 0.4381 | 0.8132 | 0.9018 |
| No log | 2.1 | 126 | 0.8362 | 0.4346 | 0.8362 | 0.9145 |
| No log | 2.1333 | 128 | 0.8234 | 0.4547 | 0.8234 | 0.9074 |
| No log | 2.1667 | 130 | 0.8154 | 0.5949 | 0.8154 | 0.9030 |
| No log | 2.2 | 132 | 0.8177 | 0.5898 | 0.8177 | 0.9043 |
| No log | 2.2333 | 134 | 0.8164 | 0.5958 | 0.8164 | 0.9036 |
| No log | 2.2667 | 136 | 0.8384 | 0.4949 | 0.8384 | 0.9157 |
| No log | 2.3 | 138 | 0.8629 | 0.4825 | 0.8629 | 0.9289 |
| No log | 2.3333 | 140 | 0.9048 | 0.4279 | 0.9048 | 0.9512 |
| No log | 2.3667 | 142 | 0.7847 | 0.5178 | 0.7847 | 0.8858 |
| No log | 2.4 | 144 | 0.7609 | 0.5759 | 0.7609 | 0.8723 |
| No log | 2.4333 | 146 | 0.7736 | 0.5089 | 0.7736 | 0.8796 |
| No log | 2.4667 | 148 | 0.7624 | 0.5024 | 0.7624 | 0.8732 |
| No log | 2.5 | 150 | 0.7849 | 0.5359 | 0.7849 | 0.8859 |
| No log | 2.5333 | 152 | 0.8208 | 0.4850 | 0.8208 | 0.9060 |
| No log | 2.5667 | 154 | 0.8547 | 0.4751 | 0.8547 | 0.9245 |
| No log | 2.6 | 156 | 0.9148 | 0.4536 | 0.9148 | 0.9565 |
| No log | 2.6333 | 158 | 0.9518 | 0.4002 | 0.9518 | 0.9756 |
| No log | 2.6667 | 160 | 0.8748 | 0.4305 | 0.8748 | 0.9353 |
| No log | 2.7 | 162 | 0.8733 | 0.5002 | 0.8733 | 0.9345 |
| No log | 2.7333 | 164 | 0.8763 | 0.4770 | 0.8763 | 0.9361 |
| No log | 2.7667 | 166 | 0.9401 | 0.3861 | 0.9401 | 0.9696 |
| No log | 2.8 | 168 | 1.0376 | 0.3546 | 1.0376 | 1.0186 |
| No log | 2.8333 | 170 | 0.9847 | 0.3511 | 0.9847 | 0.9923 |
| No log | 2.8667 | 172 | 0.8589 | 0.4244 | 0.8589 | 0.9268 |
| No log | 2.9 | 174 | 0.8472 | 0.3896 | 0.8472 | 0.9204 |
| No log | 2.9333 | 176 | 0.8381 | 0.3896 | 0.8381 | 0.9155 |
| No log | 2.9667 | 178 | 0.8513 | 0.3536 | 0.8513 | 0.9226 |
| No log | 3.0 | 180 | 0.8626 | 0.3583 | 0.8626 | 0.9288 |
| No log | 3.0333 | 182 | 0.8379 | 0.3631 | 0.8379 | 0.9154 |
| No log | 3.0667 | 184 | 0.8223 | 0.3877 | 0.8223 | 0.9068 |
| No log | 3.1 | 186 | 0.8575 | 0.4192 | 0.8575 | 0.9260 |
| No log | 3.1333 | 188 | 0.8519 | 0.4456 | 0.8519 | 0.9230 |
| No log | 3.1667 | 190 | 0.8071 | 0.5046 | 0.8071 | 0.8984 |
| No log | 3.2 | 192 | 0.7988 | 0.5331 | 0.7988 | 0.8938 |
| No log | 3.2333 | 194 | 0.7967 | 0.4869 | 0.7967 | 0.8926 |
| No log | 3.2667 | 196 | 0.7910 | 0.4428 | 0.7910 | 0.8894 |
| No log | 3.3 | 198 | 0.8053 | 0.4466 | 0.8053 | 0.8974 |
| No log | 3.3333 | 200 | 0.8726 | 0.4021 | 0.8726 | 0.9341 |
| No log | 3.3667 | 202 | 0.8671 | 0.4370 | 0.8671 | 0.9312 |
| No log | 3.4 | 204 | 0.8449 | 0.4069 | 0.8449 | 0.9192 |
| No log | 3.4333 | 206 | 0.8556 | 0.4069 | 0.8556 | 0.9250 |
| No log | 3.4667 | 208 | 0.8625 | 0.3873 | 0.8625 | 0.9287 |
| No log | 3.5 | 210 | 0.8911 | 0.4130 | 0.8911 | 0.9440 |
| No log | 3.5333 | 212 | 0.8736 | 0.4486 | 0.8736 | 0.9347 |
| No log | 3.5667 | 214 | 0.8828 | 0.4490 | 0.8828 | 0.9396 |
| No log | 3.6 | 216 | 0.9005 | 0.4733 | 0.9005 | 0.9489 |
| No log | 3.6333 | 218 | 0.9082 | 0.4734 | 0.9082 | 0.9530 |
| No log | 3.6667 | 220 | 0.9024 | 0.4838 | 0.9024 | 0.9499 |
| No log | 3.7 | 222 | 0.8794 | 0.4907 | 0.8794 | 0.9378 |
| No log | 3.7333 | 224 | 0.8729 | 0.4751 | 0.8729 | 0.9343 |
| No log | 3.7667 | 226 | 0.8451 | 0.4989 | 0.8451 | 0.9193 |
| No log | 3.8 | 228 | 0.8301 | 0.5023 | 0.8301 | 0.9111 |
| No log | 3.8333 | 230 | 0.8249 | 0.4871 | 0.8249 | 0.9082 |
| No log | 3.8667 | 232 | 0.8301 | 0.5340 | 0.8301 | 0.9111 |
| No log | 3.9 | 234 | 0.7927 | 0.5909 | 0.7927 | 0.8903 |
| No log | 3.9333 | 236 | 0.8088 | 0.5678 | 0.8088 | 0.8993 |
| No log | 3.9667 | 238 | 0.8651 | 0.4584 | 0.8651 | 0.9301 |
| No log | 4.0 | 240 | 0.7968 | 0.5298 | 0.7968 | 0.8926 |
| No log | 4.0333 | 242 | 0.7279 | 0.6597 | 0.7279 | 0.8532 |
| No log | 4.0667 | 244 | 0.7312 | 0.5810 | 0.7312 | 0.8551 |
| No log | 4.1 | 246 | 0.7008 | 0.5809 | 0.7008 | 0.8371 |
| No log | 4.1333 | 248 | 0.7665 | 0.4686 | 0.7665 | 0.8755 |
| No log | 4.1667 | 250 | 0.8644 | 0.4942 | 0.8644 | 0.9297 |
| No log | 4.2 | 252 | 0.8246 | 0.4926 | 0.8246 | 0.9081 |
| No log | 4.2333 | 254 | 0.7331 | 0.5179 | 0.7331 | 0.8562 |
| No log | 4.2667 | 256 | 0.7612 | 0.5477 | 0.7612 | 0.8725 |
| No log | 4.3 | 258 | 0.8132 | 0.5046 | 0.8132 | 0.9018 |
| No log | 4.3333 | 260 | 0.7828 | 0.5305 | 0.7828 | 0.8847 |
| No log | 4.3667 | 262 | 0.7450 | 0.4643 | 0.7450 | 0.8632 |
| No log | 4.4 | 264 | 0.7704 | 0.4510 | 0.7704 | 0.8777 |
| No log | 4.4333 | 266 | 0.8144 | 0.4686 | 0.8144 | 0.9025 |
| No log | 4.4667 | 268 | 0.7945 | 0.4563 | 0.7945 | 0.8913 |
| No log | 4.5 | 270 | 0.7498 | 0.5714 | 0.7498 | 0.8659 |
| No log | 4.5333 | 272 | 0.7884 | 0.5917 | 0.7884 | 0.8879 |
| No log | 4.5667 | 274 | 0.8044 | 0.6082 | 0.8044 | 0.8969 |
| No log | 4.6 | 276 | 0.7978 | 0.5558 | 0.7978 | 0.8932 |
| No log | 4.6333 | 278 | 0.7605 | 0.5436 | 0.7605 | 0.8721 |
| No log | 4.6667 | 280 | 0.7395 | 0.5587 | 0.7395 | 0.8599 |
| No log | 4.7 | 282 | 0.7372 | 0.5587 | 0.7372 | 0.8586 |
| No log | 4.7333 | 284 | 0.7532 | 0.5331 | 0.7532 | 0.8679 |
| No log | 4.7667 | 286 | 0.7799 | 0.5413 | 0.7799 | 0.8831 |
| No log | 4.8 | 288 | 0.7644 | 0.5552 | 0.7644 | 0.8743 |
| No log | 4.8333 | 290 | 0.7421 | 0.4511 | 0.7421 | 0.8615 |
| No log | 4.8667 | 292 | 0.7271 | 0.3569 | 0.7271 | 0.8527 |
| No log | 4.9 | 294 | 0.7279 | 0.3548 | 0.7279 | 0.8532 |
| No log | 4.9333 | 296 | 0.7324 | 0.5559 | 0.7324 | 0.8558 |
| No log | 4.9667 | 298 | 0.6967 | 0.5359 | 0.6967 | 0.8347 |
| No log | 5.0 | 300 | 0.6645 | 0.5287 | 0.6645 | 0.8152 |
| No log | 5.0333 | 302 | 0.6654 | 0.5536 | 0.6654 | 0.8157 |
| No log | 5.0667 | 304 | 0.6723 | 0.5314 | 0.6723 | 0.8199 |
| No log | 5.1 | 306 | 0.6685 | 0.5314 | 0.6685 | 0.8176 |
| No log | 5.1333 | 308 | 0.6710 | 0.5672 | 0.6710 | 0.8192 |
| No log | 5.1667 | 310 | 0.6696 | 0.6195 | 0.6696 | 0.8183 |
| No log | 5.2 | 312 | 0.6642 | 0.5680 | 0.6642 | 0.8150 |
| No log | 5.2333 | 314 | 0.6763 | 0.5785 | 0.6763 | 0.8224 |
| No log | 5.2667 | 316 | 0.7082 | 0.5674 | 0.7082 | 0.8416 |
| No log | 5.3 | 318 | 0.7030 | 0.5315 | 0.7030 | 0.8385 |
| No log | 5.3333 | 320 | 0.7039 | 0.4156 | 0.7039 | 0.8390 |
| No log | 5.3667 | 322 | 0.7149 | 0.4156 | 0.7149 | 0.8455 |
| No log | 5.4 | 324 | 0.7456 | 0.5328 | 0.7456 | 0.8635 |
| No log | 5.4333 | 326 | 0.7674 | 0.5005 | 0.7674 | 0.8760 |
| No log | 5.4667 | 328 | 0.7170 | 0.5093 | 0.7170 | 0.8468 |
| No log | 5.5 | 330 | 0.6801 | 0.5905 | 0.6801 | 0.8247 |
| No log | 5.5333 | 332 | 0.7038 | 0.5626 | 0.7038 | 0.8390 |
| No log | 5.5667 | 334 | 0.6904 | 0.5626 | 0.6904 | 0.8309 |
| No log | 5.6 | 336 | 0.6881 | 0.5577 | 0.6881 | 0.8295 |
| No log | 5.6333 | 338 | 0.6872 | 0.4776 | 0.6872 | 0.8290 |
| No log | 5.6667 | 340 | 0.6981 | 0.5054 | 0.6981 | 0.8355 |
| No log | 5.7 | 342 | 0.7004 | 0.5054 | 0.7004 | 0.8369 |
| No log | 5.7333 | 344 | 0.7283 | 0.5678 | 0.7283 | 0.8534 |
| No log | 5.7667 | 346 | 0.8332 | 0.5436 | 0.8332 | 0.9128 |
| No log | 5.8 | 348 | 0.9019 | 0.5317 | 0.9019 | 0.9497 |
| No log | 5.8333 | 350 | 0.8504 | 0.5543 | 0.8504 | 0.9222 |
| No log | 5.8667 | 352 | 0.7910 | 0.5366 | 0.7910 | 0.8894 |
| No log | 5.9 | 354 | 0.7200 | 0.6207 | 0.7200 | 0.8485 |
| No log | 5.9333 | 356 | 0.7218 | 0.5179 | 0.7218 | 0.8496 |
| No log | 5.9667 | 358 | 0.7238 | 0.5179 | 0.7238 | 0.8508 |
| No log | 6.0 | 360 | 0.7296 | 0.6288 | 0.7296 | 0.8542 |
| No log | 6.0333 | 362 | 0.7479 | 0.6110 | 0.7479 | 0.8648 |
| No log | 6.0667 | 364 | 0.7339 | 0.6022 | 0.7339 | 0.8567 |
| No log | 6.1 | 366 | 0.7301 | 0.6129 | 0.7301 | 0.8545 |
| No log | 6.1333 | 368 | 0.7229 | 0.6112 | 0.7229 | 0.8502 |
| No log | 6.1667 | 370 | 0.7183 | 0.6634 | 0.7183 | 0.8476 |
| No log | 6.2 | 372 | 0.7126 | 0.6229 | 0.7126 | 0.8442 |
| No log | 6.2333 | 374 | 0.7363 | 0.5153 | 0.7363 | 0.8581 |
| No log | 6.2667 | 376 | 0.7584 | 0.4295 | 0.7584 | 0.8709 |
| No log | 6.3 | 378 | 0.7692 | 0.4477 | 0.7692 | 0.8770 |
| No log | 6.3333 | 380 | 0.7427 | 0.4592 | 0.7427 | 0.8618 |
| No log | 6.3667 | 382 | 0.7113 | 0.5166 | 0.7113 | 0.8434 |
| No log | 6.4 | 384 | 0.7395 | 0.6247 | 0.7395 | 0.8599 |
| No log | 6.4333 | 386 | 0.8074 | 0.6260 | 0.8074 | 0.8986 |
| No log | 6.4667 | 388 | 0.7949 | 0.6198 | 0.7949 | 0.8916 |
| No log | 6.5 | 390 | 0.7828 | 0.6318 | 0.7828 | 0.8848 |
| No log | 6.5333 | 392 | 0.7452 | 0.6188 | 0.7452 | 0.8632 |
| No log | 6.5667 | 394 | 0.7295 | 0.5808 | 0.7295 | 0.8541 |
| No log | 6.6 | 396 | 0.7441 | 0.5375 | 0.7441 | 0.8626 |
| No log | 6.6333 | 398 | 0.7745 | 0.4987 | 0.7745 | 0.8801 |
| No log | 6.6667 | 400 | 0.8245 | 0.4025 | 0.8245 | 0.9080 |
| No log | 6.7 | 402 | 0.8315 | 0.4025 | 0.8315 | 0.9119 |
| No log | 6.7333 | 404 | 0.7928 | 0.4715 | 0.7928 | 0.8904 |
| No log | 6.7667 | 406 | 0.7590 | 0.4776 | 0.7590 | 0.8712 |
| No log | 6.8 | 408 | 0.7635 | 0.4594 | 0.7635 | 0.8738 |
| No log | 6.8333 | 410 | 0.7613 | 0.5179 | 0.7613 | 0.8725 |
| No log | 6.8667 | 412 | 0.7659 | 0.4838 | 0.7659 | 0.8752 |
| No log | 6.9 | 414 | 0.7871 | 0.5500 | 0.7871 | 0.8872 |
| No log | 6.9333 | 416 | 0.7997 | 0.5763 | 0.7997 | 0.8943 |
| No log | 6.9667 | 418 | 0.7762 | 0.4898 | 0.7762 | 0.8810 |
| No log | 7.0 | 420 | 0.7729 | 0.4691 | 0.7729 | 0.8792 |
| No log | 7.0333 | 422 | 0.7768 | 0.4941 | 0.7768 | 0.8813 |
| No log | 7.0667 | 424 | 0.7857 | 0.4719 | 0.7857 | 0.8864 |
| No log | 7.1 | 426 | 0.8169 | 0.5331 | 0.8169 | 0.9038 |
| No log | 7.1333 | 428 | 0.8712 | 0.4821 | 0.8712 | 0.9334 |
| No log | 7.1667 | 430 | 0.8486 | 0.4343 | 0.8486 | 0.9212 |
| No log | 7.2 | 432 | 0.8093 | 0.4220 | 0.8093 | 0.8996 |
| No log | 7.2333 | 434 | 0.7696 | 0.4878 | 0.7696 | 0.8773 |
| No log | 7.2667 | 436 | 0.7606 | 0.4722 | 0.7606 | 0.8721 |
| No log | 7.3 | 438 | 0.7535 | 0.4936 | 0.7535 | 0.8681 |
| No log | 7.3333 | 440 | 0.7846 | 0.5690 | 0.7846 | 0.8858 |
| No log | 7.3667 | 442 | 0.8928 | 0.5224 | 0.8928 | 0.9449 |
| No log | 7.4 | 444 | 0.9639 | 0.4641 | 0.9639 | 0.9818 |
| No log | 7.4333 | 446 | 0.9254 | 0.5023 | 0.9254 | 0.9620 |
| No log | 7.4667 | 448 | 0.7936 | 0.6487 | 0.7936 | 0.8909 |
| No log | 7.5 | 450 | 0.6927 | 0.6493 | 0.6927 | 0.8323 |
| No log | 7.5333 | 452 | 0.6944 | 0.5434 | 0.6944 | 0.8333 |
| No log | 7.5667 | 454 | 0.7139 | 0.5353 | 0.7139 | 0.8449 |
| No log | 7.6 | 456 | 0.7001 | 0.5131 | 0.7001 | 0.8367 |
| No log | 7.6333 | 458 | 0.6917 | 0.5635 | 0.6917 | 0.8317 |
| No log | 7.6667 | 460 | 0.7447 | 0.5131 | 0.7447 | 0.8629 |
| No log | 7.7 | 462 | 0.7970 | 0.5766 | 0.7970 | 0.8928 |
| No log | 7.7333 | 464 | 0.7959 | 0.6151 | 0.7959 | 0.8922 |
| No log | 7.7667 | 466 | 0.7106 | 0.6142 | 0.7106 | 0.8430 |
| No log | 7.8 | 468 | 0.6778 | 0.5981 | 0.6778 | 0.8233 |
| No log | 7.8333 | 470 | 0.6696 | 0.5835 | 0.6696 | 0.8183 |
| No log | 7.8667 | 472 | 0.6387 | 0.6476 | 0.6387 | 0.7992 |
| No log | 7.9 | 474 | 0.6564 | 0.5963 | 0.6564 | 0.8102 |
| No log | 7.9333 | 476 | 0.6833 | 0.5787 | 0.6833 | 0.8266 |
| No log | 7.9667 | 478 | 0.6721 | 0.5248 | 0.6721 | 0.8198 |
| No log | 8.0 | 480 | 0.6802 | 0.5706 | 0.6802 | 0.8248 |
| No log | 8.0333 | 482 | 0.7351 | 0.5141 | 0.7351 | 0.8574 |
| No log | 8.0667 | 484 | 0.8316 | 0.4481 | 0.8316 | 0.9119 |
| No log | 8.1 | 486 | 0.8060 | 0.5270 | 0.8060 | 0.8978 |
| No log | 8.1333 | 488 | 0.7043 | 0.5493 | 0.7043 | 0.8392 |
| No log | 8.1667 | 490 | 0.6670 | 0.5530 | 0.6670 | 0.8167 |
| No log | 8.2 | 492 | 0.6909 | 0.5877 | 0.6909 | 0.8312 |
| No log | 8.2333 | 494 | 0.7202 | 0.6554 | 0.7202 | 0.8486 |
| No log | 8.2667 | 496 | 0.7161 | 0.6293 | 0.7161 | 0.8462 |
| No log | 8.3 | 498 | 0.7869 | 0.6326 | 0.7869 | 0.8871 |
| 0.2637 | 8.3333 | 500 | 0.9139 | 0.5013 | 0.9139 | 0.9560 |
| 0.2637 | 8.3667 | 502 | 0.9650 | 0.3881 | 0.9650 | 0.9823 |
| 0.2637 | 8.4 | 504 | 0.9133 | 0.4208 | 0.9133 | 0.9557 |
| 0.2637 | 8.4333 | 506 | 0.8102 | 0.4044 | 0.8102 | 0.9001 |
| 0.2637 | 8.4667 | 508 | 0.7391 | 0.5260 | 0.7391 | 0.8597 |
| 0.2637 | 8.5 | 510 | 0.7211 | 0.5260 | 0.7211 | 0.8491 |
| 0.2637 | 8.5333 | 512 | 0.7511 | 0.5637 | 0.7511 | 0.8667 |
| 0.2637 | 8.5667 | 514 | 0.8472 | 0.5044 | 0.8472 | 0.9204 |
| 0.2637 | 8.6 | 516 | 0.9148 | 0.5119 | 0.9148 | 0.9565 |
| 0.2637 | 8.6333 | 518 | 0.9354 | 0.5219 | 0.9354 | 0.9672 |
| 0.2637 | 8.6667 | 520 | 0.9819 | 0.4681 | 0.9819 | 0.9909 |
| 0.2637 | 8.7 | 522 | 0.9180 | 0.4815 | 0.9180 | 0.9581 |
| 0.2637 | 8.7333 | 524 | 0.8448 | 0.4216 | 0.8448 | 0.9192 |
| 0.2637 | 8.7667 | 526 | 0.8285 | 0.3941 | 0.8285 | 0.9102 |
| 0.2637 | 8.8 | 528 | 0.8534 | 0.3360 | 0.8534 | 0.9238 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
lesso10/17dbf767-6ad8-4be2-9e84-a857af344278 | lesso10 | "2025-03-16T11:21:29Z" | 19 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-03-12T09:22:39Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 17dbf767-6ad8-4be2-9e84-a857af344278
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 17dbf767-6ad8-4be2-9e84-a857af344278
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00021
- train_batch_size: 4
- eval_batch_size: 4
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
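The cosine schedule above ramps the learning rate linearly over the first 100 warmup steps, then decays it along a half cosine over the remaining steps. A pure-Python sketch of that shape, using the hyperparameters listed here (an illustration of the schedule, not transformers' exact implementation):

```python
import math

def cosine_with_warmup(step, base_lr=0.00021, warmup_steps=100, total_steps=500):
    """Learning rate at `step`: linear warmup, then cosine decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_with_warmup(0))    # 0.0 at the start of warmup
print(cosine_with_warmup(100))  # peak: 0.00021
print(cosine_with_warmup(500))  # decayed to ~0.0
```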
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 5.9515 |
| 3.1327 | 0.1411 | 500 | 3.1254 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
isspek/roberta-base_monkeypox_4_2e-5_16_undersampling_0.1 | isspek | "2024-12-15T17:04:23Z" | 184 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-15T17:04:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
l3xx/resul5757 | l3xx | "2024-09-25T16:44:30Z" | 25 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-08-19T20:25:19Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: R3Sul
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# resul5757
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `R3Sul` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format. |
Shawon16/VideoMAE_BdSLW60_FrameRate_Corrected_with_Augment_20_epoch_RQ | Shawon16 | "2025-01-12T08:43:01Z" | 15 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2025-01-11T16:02:02Z" | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: VideoMAE_BdSLW60_FrameRate_Corrected_with_Augment_20_epoch_RQ
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VideoMAE_BdSLW60_FrameRate_Corrected_with_Augment_20_epoch_RQ
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6708
- Accuracy: 0.895
- Precision: 0.9032
- Recall: 0.895
- F1: 0.8843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 18560
- mixed_precision_training: Native AMP
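The effective batch size of 8 above comes from accumulating gradients over 4 micro-batches of 2 before each optimizer step. A minimal, framework-free sketch of the idea (illustrative only; the actual Trainer handles this internally):

```python
# Illustrative sketch (not the HF Trainer internals): accumulate gradients
# over 4 micro-batches of size 2 before each optimizer step, so one update
# sees an effective batch of 8 samples. Toy model: y = w * x with MSE loss.
def grad(w, batch):
    # d/dw of mean((w * x - y) ** 2) over the micro-batch
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

data = [(float(i), 3.0 * float(i)) for i in range(8)]    # targets fit w = 3
micro_batches = [data[i:i + 2] for i in range(0, 8, 2)]  # 4 micro-batches of 2

w, lr, accum_steps = 0.0, 0.01, 4
for _ in range(200):
    acc = 0.0
    for mb in micro_batches:
        acc += grad(w, mb) / accum_steps  # scale so the sum is a mean over 8
    w -= lr * acc                         # one optimizer step per 8 samples
```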
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 10.4448 | 0.0501 | 929 | 2.5241 | 0.3483 | 0.3333 | 0.3483 | 0.2892 |
| 2.3392 | 1.0501 | 1858 | 0.8054 | 0.7983 | 0.8006 | 0.7983 | 0.7746 |
| 0.8833 | 2.0501 | 2787 | 0.5077 | 0.8467 | 0.8835 | 0.8467 | 0.8400 |
| 0.3893 | 3.0501 | 3716 | 0.5973 | 0.8783 | 0.8885 | 0.8783 | 0.8613 |
| 0.4181 | 4.0501 | 4645 | 1.0255 | 0.795 | 0.8179 | 0.795 | 0.7654 |
| 0.144 | 5.0501 | 5574 | 0.4972 | 0.905 | 0.9065 | 0.905 | 0.8870 |
| 0.2381 | 6.0501 | 6503 | 1.1021 | 0.8017 | 0.8541 | 0.8017 | 0.7955 |
| 0.2042 | 7.0501 | 7432 | 0.9343 | 0.855 | 0.8919 | 0.855 | 0.8343 |
| 0.1844 | 8.0501 | 8361 | 0.4798 | 0.9083 | 0.9237 | 0.9083 | 0.9061 |
| 0.1416 | 9.0501 | 9290 | 0.5504 | 0.9 | 0.9332 | 0.9 | 0.8899 |
| 0.1182 | 10.0501 | 10219 | 0.3593 | 0.9317 | 0.9462 | 0.9317 | 0.9311 |
| 0.0255 | 11.0501 | 11148 | 0.5179 | 0.9 | 0.9307 | 0.9 | 0.8987 |
| 0.1122 | 12.0501 | 12077 | 0.5793 | 0.9017 | 0.9192 | 0.9017 | 0.8968 |
| 0.0681 | 13.0501 | 13006 | 0.6389 | 0.9133 | 0.9368 | 0.9133 | 0.9056 |
| 0.047 | 14.0501 | 13935 | 0.5920 | 0.9067 | 0.9284 | 0.9067 | 0.9054 |
| 0.0057 | 15.0501 | 14864 | 0.6708 | 0.895 | 0.9032 | 0.895 | 0.8843 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.1
|
KingEmpire/Dinant_4 | KingEmpire | "2025-02-28T09:10:47Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-02-28T08:43:47Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
MultiBertGunjanPatrick/multiberts-seed-4-140k | MultiBertGunjanPatrick | "2021-10-04T05:10:19Z" | 1 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:04Z" | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 140k (uncased)
MultiBERTs Seed 4 intermediate checkpoint at 140k steps, pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-140k')
model = BertModel.from_pretrained("multiberts-seed-4-140k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
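The 80/10/10 rule above can be sketched in a few lines of plain Python (illustrative only; the real pipeline operates on WordPiece token ids, and the toy vocabulary here is an assumption):

```python
import random

random.seed(0)

# Toy vocabulary (an assumption for illustration; the real model uses a
# 30,000-token WordPiece vocabulary).
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran", "happy"]

def mask_tokens(tokens, mask_prob=0.15):
    """BERT-style masking: select ~15% of positions; of those,
    80% become [MASK], 10% a random token, 10% are left unchanged."""
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)          # the model must predict the original
            r = random.random()
            if r < 0.8:
                inputs.append("[MASK]")
            elif r < 0.9:
                inputs.append(random.choice(VOCAB))
            else:
                inputs.append(tok)      # left as is, but still predicted
        else:
            labels.append(None)         # position ignored by the MLM loss
            inputs.append(tok)
    return inputs, labels

sentence = ["the", "cat", "sat", "on", "the", "mat"] * 10
masked, labels = mask_tokens(sentence)
```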
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
hungryc9/deval3-1 | hungryc9 | "2025-02-21T10:38:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-21T06:59:57Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
keras/vgg_13_imagenet | keras | "2025-03-24T22:42:32Z" | 20 | 0 | keras-hub | [
"keras-hub",
"image-classification",
"arxiv:1409.1556",
"license:apache-2.0",
"region:us"
] | image-classification | "2024-10-28T21:40:12Z" | ---
library_name: keras-hub
license: apache-2.0
tags:
- image-classification
pipeline_tag: image-classification
---
### Model Overview
The VGG model is a type of convolutional neural network (CNN) architecture designed for image recognition and classification tasks. Developed by the Visual Geometry Group at the University of Oxford, it was introduced in the paper titled "Very Deep Convolutional Networks for Large-Scale Image Recognition" by Karen Simonyan and Andrew Zisserman in 2014. This model is supported in both KerasCV and KerasHub. KerasCV will no longer be actively developed, so please try to use KerasHub.
## Links
* [VGG Quickstart Notebook](https://www.kaggle.com/code/prasadsachin/vgg-quickstart-kerashub)
* [VGG paper](https://arxiv.org/abs/1409.1556)
* [VGG API Documentation](https://keras.io/keras_hub/api/models/vgg/)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras>=3
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Weights have been ported from https://huggingface.co/timm.
| Preset Name | Parameters | Description |
|------------------|------------|----------------------------------------------------------------|
| vgg_11_imagenet | 9.22M | 11-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. |
| vgg_13_imagenet | 9.40M | 13-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. |
| vgg_16_imagenet | 14.71M | 16-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. |
| vgg_19_imagenet | 20.02M | 19-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. |
## Example Usage
```python
input_data = np.ones(shape=(2, 224, 224, 3))
# Pretrained backbone
model = keras_hub.models.VGGBackbone.from_preset("vgg_13_imagenet")
model(input_data)
# Randomly initialized backbone with a custom config
model = keras_hub.models.VGGBackbone(
stackwise_num_repeats=[2, 3, 3, 2],
stackwise_num_filters=[64, 128, 256, 512],
)
model(input_data)
# Use VGG for image classification task
model = keras_hub.models.ImageClassifier.from_preset("vgg_13_imagenet")
# Use timm presets directly from Hugging Face
model = keras_hub.models.ImageClassifier.from_preset('hf://timm/vgg11.tv_in1k')
```
## Example Usage with Hugging Face URI
```python
input_data = np.ones(shape=(2, 224, 224, 3))
# Pretrained backbone
model = keras_hub.models.VGGBackbone.from_preset("hf://keras/vgg_13_imagenet")
model(input_data)
# Randomly initialized backbone with a custom config
model = keras_hub.models.VGGBackbone(
stackwise_num_repeats=[2, 3, 3, 2],
stackwise_num_filters=[64, 128, 256, 512],
)
model(input_data)
# Use VGG for image classification task
model = keras_hub.models.ImageClassifier.from_preset("hf://keras/vgg_13_imagenet")
# Use timm presets directly from Hugging Face
model = keras_hub.models.ImageClassifier.from_preset('hf://timm/vgg11.tv_in1k')
```
|
Helsinki-NLP/opus-mt-fi-es | Helsinki-NLP | "2023-08-16T11:34:28Z" | 189 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fi",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-es
* source languages: fi
* target languages: es
* OPUS readme: [fi-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-es/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-es/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-es/opus-2020-04-12.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fi.es | 51.5 | 0.700 |
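A minimal usage sketch with the 🤗 transformers library (the Finnish sample sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Hyvää huomenta!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```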
|
visdata/bang3 | visdata | "2025-02-02T07:16:58Z" | 55 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-02T07:06:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gabriel99Terror/jayce_mel | gabriel99Terror | "2025-01-12T02:35:10Z" | 22 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-12T02:35:08Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Jayce_Mel
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('gabriel99Terror/jayce_mel', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Triangle104/AwA-Dolphin_0.6b-Q4_K_M-GGUF | Triangle104 | "2025-01-06T07:33:58Z" | 7 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-06T07:33:53Z" | ---
base_model: Triangle104/AwA-Dolphin_0.6b
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/AwA-Dolphin_0.6b-Q4_K_M-GGUF
This model was converted to GGUF format from [`Triangle104/AwA-Dolphin_0.6b`](https://huggingface.co/Triangle104/AwA-Dolphin_0.6b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Triangle104/AwA-Dolphin_0.6b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/AwA-Dolphin_0.6b-Q4_K_M-GGUF --hf-file awa-dolphin_0.6b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/AwA-Dolphin_0.6b-Q4_K_M-GGUF --hf-file awa-dolphin_0.6b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/AwA-Dolphin_0.6b-Q4_K_M-GGUF --hf-file awa-dolphin_0.6b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/AwA-Dolphin_0.6b-Q4_K_M-GGUF --hf-file awa-dolphin_0.6b-q4_k_m.gguf -c 2048
```
|