Dataset schema (column name, type, observed range/values):

| Column | Type | Range / Values |
|:---|:---|:---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-27 06:27:46 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 499 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-27 06:26:25 |
| card | string | length 11 to 1.01M |
Mikezeng/task-13-google-gemma-2b | Mikezeng | 2024-11-27T03:19:33Z | 10 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2024-10-14T03:33:56Z | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
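Since this section is left blank, the following is an illustrative sketch only, assuming this repository holds a standard PEFT adapter for the `google/gemma-2b` base model declared in the metadata:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the declared base model, then attach this repository's adapter on top.
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base_model, "Mikezeng/task-13-google-gemma-2b")
```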
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
RoyJoy/llama_dec27 | RoyJoy | 2024-11-27T03:18:46Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-27T03:15:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
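The card leaves this blank; a minimal sketch, assuming the standard `transformers` text-generation API applies to this Llama checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the checkpoint from the Hub and run a short generation.
tokenizer = AutoTokenizer.from_pretrained("RoyJoy/llama_dec27")
model = AutoModelForCausalLM.from_pretrained("RoyJoy/llama_dec27", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```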
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NyanDoggo/Qwen2.5-Coder-7B-Instruct-Spider-Baseline | NyanDoggo | 2024-11-27T03:17:47Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"region:us"
] | null | 2024-11-27T00:58:50Z | ---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
Mikezeng/task-13-Qwen-Qwen1.5-1.8B | Mikezeng | 2024-11-27T03:16:38Z | 34 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2024-10-14T03:31:05Z | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
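As above, the card provides no snippet; an illustrative sketch assuming a standard PEFT adapter for the declared `Qwen/Qwen1.5-1.8B` base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Attach this repository's adapter to the declared Qwen base model.
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B", device_map="auto")
model = PeftModel.from_pretrained(base_model, "Mikezeng/task-13-Qwen-Qwen1.5-1.8B")
```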
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
Mikezeng/task-13-Qwen-Qwen1.5-0.5B | Mikezeng | 2024-11-27T03:13:16Z | 15 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2024-10-14T03:27:53Z | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
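Again no snippet is provided; an illustrative sketch assuming a standard PEFT adapter for `Qwen/Qwen1.5-0.5B`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B", device_map="auto")
model = PeftModel.from_pretrained(base_model, "Mikezeng/task-13-Qwen-Qwen1.5-0.5B")
```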
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
gowhyyou/task-13-google-gemma-2b | gowhyyou | 2024-11-27T03:09:53Z | 8 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2024-10-14T03:24:36Z | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
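No snippet is provided here either; an illustrative sketch assuming a standard PEFT adapter for `google/gemma-2b`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
model = PeftModel.from_pretrained(base_model, "gowhyyou/task-13-google-gemma-2b")
```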
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
DavidLanz/text2cypher-gemma-2-9b-it-finetuned-2024v1 | DavidLanz | 2024-11-27T03:07:22Z | 145 | 3 | transformers | [
"transformers",
"gguf",
"conversational",
"neo4j",
"cypher",
"text2cypher",
"text2text-generation",
"en",
"dataset:neo4j/text2cypher-2024v1",
"arxiv:1910.09700",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-27T02:50:37Z | ---
license: gemma
library_name: transformers
pipeline_tag: text2text-generation
tags:
- conversational
- neo4j
- cypher
- text2cypher
base_model: google/gemma-2-9b-it
datasets:
- neo4j/text2cypher-2024v1
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
This model serves as a demonstration of how fine-tuning foundational models using the Neo4j-Text2Cypher(2024) Dataset ([link](https://huggingface.co/datasets/neo4j/text2cypher-2024v1)) can enhance performance on the Text2Cypher task.\
Please **note**, this is part of ongoing research and exploration, aimed at highlighting the dataset's potential rather than a production-ready solution.
**Base model:** google/gemma-2-9b-it \
**Dataset:** neo4j/text2cypher-2024v1
An overview of the finetuned models and benchmarking results are shared at [Link1](https://medium.com/p/d77be96ab65a) and [Link2](https://medium.com/p/b2203d1173b0)
Have ideas or insights? Contact us: [Neo4j/Team-GenAI](mailto:[email protected])
<!-- - **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed] -->
<!-- ### Model Sources [optional]
<!-- Provide the basic links for the model. -->
<!-- - **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed] -->
<!-- ## Uses -->
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
<!-- ### Direct Use -->
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- [More Information Needed] -->
<!-- ### Downstream Use [optional] -->
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- [More Information Needed] -->
<!-- ### Out-of-Scope Use
-->
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- [More Information Needed] -->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
We need to be cautious about a few risks:
* In our evaluation setup, the training and test sets come from the same data distribution (sampled from a larger dataset). If the data distribution changes, the results may not follow the same pattern.
* The datasets used were gathered from publicly available sources. Over time, foundational models may access both the training and test sets, potentially achieving similar or even better results.
Also check the related blog post: [Link](https://medium.com/p/b2203d1173b0)
<!-- ### Recommendations -->
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
<!-- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. -->
<!-- ## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed] -->
## Training Details
<!-- ### Training Data -->
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
<!-- [More Information Needed]-->
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Used RunPod with the following setup:
* 1 x A100 PCIe
* 31 vCPU 117 GB RAM
* runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04
* On-Demand - Secure Cloud
* 60 GB Disk
* 60 GB Pod Volume
<!-- * ~16 hours
* $30 -->
<!-- #### Preprocessing [optional]
[More Information Needed]
-->
#### Training Hyperparameters
<!-- - **Training regime:** -->
<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig
from trl import SFTConfig

lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    target_modules=target_modules,  # module names selected for fine-tuning
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

sft_config = SFTConfig(
    dataset_text_field=dataset_text_field,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    dataset_num_proc=16,
    max_seq_length=1600,
    logging_dir="./logs",
    num_train_epochs=1,
    learning_rate=2e-5,
    save_steps=5,
    save_total_limit=1,
    logging_steps=5,
    output_dir="outputs",
    optim="paged_adamw_8bit",
    save_strategy="steps",
)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```
<!-- #### Speeds, Sizes, Times [optional] -->
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
<!-- [More Information Needed] -->
<!-- ## Evaluation -->
<!-- This section describes the evaluation protocols and provides the results. -->
<!-- ### Testing Data, Factors & Metrics -->
<!-- #### Testing Data -->
<!-- This should link to a Dataset Card if possible. -->
<!-- [More Information Needed] -->
<!-- #### Factors -->
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
<!-- [More Information Needed]
#### Metrics -->
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
<!-- [More Information Needed]
### Results
[More Information Needed]
#### Summary -->
<!-- ## Model Examination [optional]
-->
<!-- Relevant interpretability work for the model goes here -->
<!-- [More Information Needed]
## Environmental Impact -->
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
<!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]-->
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
<!-- **BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional] -->
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
<!-- [More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] -->
### Framework versions
- PEFT 0.12.0
### Example Cypher generation
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "DavidLanz/text2cypher-gemma-2-9b-it-finetuned-2024v1"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float32,
device_map="auto",
low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
question = "What are the movies of Tom Hanks?"
schema = "(:Actor)-[:ActedIn]->(:Movie)"
instruction = (
"Generate Cypher statement to query a graph database. "
"Use only the provided relationship types and properties in the schema. \n"
"Schema: {schema} \n Question: {question} \n Cypher output: "
)
prompt = instruction.format(schema=schema, question=question)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
model.eval()
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=512)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated Cypher Query:", generated_text)
def prepare_chat_prompt(question, schema):
chat = [
{
"role": "user",
"content": instruction.format(
schema=schema, question=question
),
}
]
return chat
def _postprocess_output_cypher(output_cypher: str) -> str:
# Remove any explanation or formatting markers
partition_by = "**Explanation:**"
output_cypher, _, _ = output_cypher.partition(partition_by)
output_cypher = output_cypher.strip("`\n")
output_cypher = output_cypher.lstrip("cypher\n")
output_cypher = output_cypher.strip("`\n ")
return output_cypher
new_message = prepare_chat_prompt(question=question, schema=schema)
try:
prompt = tokenizer.apply_chat_template(new_message, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt", padding=True).to("cuda")
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=512)
chat_generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
final_cypher = _postprocess_output_cypher(chat_generated_text)
print("Processed Cypher Query:", final_cypher)
except AttributeError:
print("Error: `apply_chat_template` not supported by this tokenizer. Check compatibility.")
``` |
xabackus/sexism-detector-Spanish-8842e-3001 | xabackus | 2024-11-27T02:59:42Z | 185 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T02:49:19Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sexism-detector-Spanish-8842e-3001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sexism-detector-Spanish-8842e-3001
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4665
- Accuracy: 0.8246
- F1: 0.7453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.812 | 1.0 | 225 | 0.5324 | 0.8246 | 0.7453 |
| 0.5378 | 2.0 | 450 | 0.4644 | 0.8246 | 0.7453 |
| 0.5341 | 3.0 | 675 | 0.4940 | 0.8246 | 0.7453 |
| 0.4686 | 4.0 | 900 | 0.4665 | 0.8246 | 0.7453 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
BigHuggyD/TheDrummer_Behemoth-123B-v2.1_exl2_6.0bpw_h6 | BigHuggyD | 2024-11-27T02:49:36Z | 6 | 0 | null | [
"safetensors",
"mistral",
"license:other",
"6-bit",
"exl2",
"region:us"
] | null | 2024-11-27T02:43:13Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2500 members strong 💪
### Now with more channels! A hub for creatives and makers alike!
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v2.1 🦣
> Nothing in the void is foreign to us. The place we go is the place we belong.

## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v2.1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v2.1-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v2.1-GGUF (recommended for smaller quants)
## Description
Behemoth v2.x is a finetune of the new Largestral 2411 with system prompt support. Testers have noted that **everything** felt improved.
### Usage
Testers say this frankenformat maximizes the model's potential: **Metharme** with Mistral's new system tokens
- `[SYSTEM_PROMPT] <|system|>{{system_message}}[/SYSTEM_PROMPT]<|user|>{{user_message}}<|model|>{{assistant_message}}`
- `<|system|>[SYSTEM_PROMPT] {{system_message}}[/SYSTEM_PROMPT]<|user|>{{user_message}}<|model|>{{assistant_message}}`
*Take note that the opening system tag SHOULD ALWAYS be followed by a single whitespace, as in the variants above (see the sketch below).*
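For illustration only (this sketch is not from the original card), the first template variant above can be filled in like so, using the tags verbatim:

```python
def behemoth_prompt(system_message: str, user_message: str) -> str:
    # First Metharme/Mistral variant; note the single space after [SYSTEM_PROMPT].
    return (
        f"[SYSTEM_PROMPT] <|system|>{system_message}[/SYSTEM_PROMPT]"
        f"<|user|>{user_message}<|model|>"
    )

print(behemoth_prompt("You are a helpful storyteller.", "Tell me about the void."))
```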
Complete SillyTavern Settings in BeaverAI Club: https://discord.com/channels/1238219753324281886/1309968730301792370/1309968730301792370
### Versions
- [v2.0](https://huggingface.co/TheDrummer/Behemoth-123B-v2) is equivalent to Behemoth v1.0 (Classic)
- [v2.1](https://huggingface.co/TheDrummer/Behemoth-123B-v2.1) is equivalent to Behemoth v1.1 (Creative Boost)
- [v2.2](https://huggingface.co/TheDrummer/Behemoth-123B-v2.2) is an improvement of Behemoth v2.1 (Creative++)
## Special Thanks
Thank you to each and every one of you who donated/subscribed on [Ko-Fi](https://ko-fi.com/thedrummer) 🙇 I hope to never disappoint!
```
Toasty Pigeon
theguywhogamesalot
Grozi
F
Marinara
Ko-fi Supporter
Grozi
Phaelon
ONTHEREDTEAM
EvarinSharath'fe(USM-Valor)
Silva
Dakkidaze
AlexTheVP
Pseudo
Kistara
Dr. Fjut
Grozi 🥈
KinjiHakari777
dustywintr
Syd
HumbleConsumer
Syd
Ko-fi Supporter
Arkamist
joe 🥇
Toad
Lied
Konnect
Kistara
Grozi 🥉
SleepDeprived3
Luigi
Nestor
```
https://ko-fi.com/thedrummer/leaderboard
```
Finetuned by yours truly,
Drummer
```

|
mradermacher/BigWeave-v6-90b-i1-GGUF | mradermacher | 2024-11-27T02:49:09Z | 147 | 1 | transformers | [
"transformers",
"gguf",
"Xwin",
"Euryale 1.3",
"frankenmerge",
"90b",
"en",
"base_model:llmixer/BigWeave-v6-90b",
"base_model:quantized:llmixer/BigWeave-v6-90b",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-26T08:07:43Z | ---
base_model: llmixer/BigWeave-v6-90b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- Xwin
- Euryale 1.3
- frankenmerge
- 90b
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/llmixer/BigWeave-v6-90b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BigWeave-v6-90b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
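As an illustrative sketch (not part of the original card), a multi-part download such as the i1-Q4_K_M quant listed below can be joined into a single file before use; the filenames are taken from that row:

```python
import shutil

# Join a two-part GGUF download into one file (equivalent to `cat part1 part2 > out`).
parts = [
    "BigWeave-v6-90b.i1-Q4_K_M.gguf.part1of2",
    "BigWeave-v6-90b.i1-Q4_K_M.gguf.part2of2",
]
with open("BigWeave-v6-90b.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```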
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ1_S.gguf) | i1-IQ1_S | 18.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ1_M.gguf) | i1-IQ1_M | 20.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 25.9 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ2_S.gguf) | i1-IQ2_S | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ2_M.gguf) | i1-IQ2_M | 29.6 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q2_K.gguf) | i1-Q2_K | 32.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 33.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 36.0 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 37.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ3_S.gguf) | i1-IQ3_S | 38.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ3_M.gguf) | i1-IQ3_M | 39.3 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 42.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 46.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 47.0 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q4_0.gguf) | i1-Q4_0 | 49.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 49.9 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 52.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 60.5 | |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 62.1 | |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v6-90b-i1-GGUF/resolve/main/BigWeave-v6-90b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 72.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
LHRuig/brdptt2 | LHRuig | 2024-11-27T02:48:32Z | 17 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2024-11-27T02:48:14Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: >-
images/michael-kors-blue-performance-stretch-slim-fit-wedding-suit-coat.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: brdp
---
# brdptt2
<Gallery />
## Model description
brd ptt lora
## Trigger words
You should use `brdp` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/brdptt2/tree/main) them in the Files & versions tab.
|
NyanDoggo/Qwen2.5-Coder-7B-Instruct-Spider-Baseline-GGUF | NyanDoggo | 2024-11-27T02:46:17Z | 24 | 0 | null | [
"gguf",
"qwen2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-27T02:04:37Z | ---
license: apache-2.0
---
|
PrunaAI/wisenut-nlp-team-Wisedom-8B-bnb-8bit-smashed | PrunaAI | 2024-11-27T02:44:42Z | 5 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:wisenut-nlp-team/Wisedom-8B",
"base_model:quantized:wisenut-nlp-team/Wisedom-8B",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-27T02:35:51Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: wisenut-nlp-team/Wisedom-8B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend measuring efficiency directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo wisenut-nlp-team/Wisedom-8B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
# Quote the version specifier so the shell does not treat '>' as redirection.
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/wisenut-nlp-team-Wisedom-8B-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("wisenut-nlp-team/Wisedom-8B")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model wisenut-nlp-team/Wisedom-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
TinyFish-cn/Mistral-Nemo-pixiv-novel_Q8_0 | TinyFish-cn | 2024-11-27T02:40:24Z | 291 | 1 | null | [
"gguf",
"mistral",
"dataset:Orion-zhen/tagged-pixiv-novel",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:quantized:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-27T02:10:46Z | ---
license: apache-2.0
datasets:
- Orion-zhen/tagged-pixiv-novel
base_model:
- unsloth/Mistral-Nemo-Base-2407
--- |
xabackus/sexism-detector-Spanish-8842e-4001 | xabackus | 2024-11-27T02:39:15Z | 180 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T02:28:55Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sexism-detector-Spanish-8842e-4001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sexism-detector-Spanish-8842e-4001
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4671
- Accuracy: 0.8246
- F1: 0.7453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5179 | 1.0 | 225 | 0.6030 | 0.8246 | 0.7453 |
| 0.4884 | 2.0 | 450 | 0.4784 | 0.8246 | 0.7453 |
| 0.4628 | 3.0 | 675 | 0.4677 | 0.8246 | 0.7453 |
| 0.4588 | 4.0 | 900 | 0.4671 | 0.8246 | 0.7453 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
shanearora/i-am-a-good-big-instruct-model | shanearora | 2024-11-27T02:29:59Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"olmo_1124",
"text-generation",
"conversational",
"en",
"dataset:allenai/RLVR-GSM-MATH-IF-Mixed-Constraints",
"arxiv:2411.15124",
"base_model:allenai/OLMo-2-1124-13B-DPO",
"base_model:quantized:allenai/OLMo-2-1124-13B-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-27T02:22:44Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
base_model:
- allenai/OLMo-2-1124-13B-DPO
library_name: transformers
datasets:
- allenai/RLVR-GSM-MATH-IF-Mixed-Constraints
---
<img alt="OLMo Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmo2/olmo.png" width="242px">
# OLMo-2-1124-13B-Instruct
OLMo-2 13B Instruct November 2024 is a post-trained variant of the [OLMo-2 13B November 2024](https://huggingface.co/allenai/OLMo2-13B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture), further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-1124-13b-preference-mix), and finally RLVR training using [this data](https://huggingface.co/datasets/allenai/RLVR-GSM-MATH-IF-Mixed-Constraints).
Tülu 3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
Check out the OLMo 2 paper (forthcoming) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details.
The core models released in this batch include the following:
| **Stage** | **OLMo-2 7B** | **OLMo 2 13B** |
|----------------------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
| **Base Model** | [allenai/OLMo2-7B-1124](https://huggingface.co/allenai/OLMo2-7B-1124) | [allenai/OLMo-2-13B-1124](https://huggingface.co/allenai/OLMo-2-13B-1124) |
| **SFT** | [allenai/OLMo-2-1124-7B-SFT](https://huggingface.co/allenai/OLMo-2-1124-7B-SFT) | [allenai/OLMo-2-1124-13B-SFT](https://huggingface.co/allenai/OLMo-2-1124-13B-SFT) |
| **DPO** | [allenai/OLMo-2-1124-7B-DPO](https://huggingface.co/allenai/OLMo-2-1124-7B-DPO) | [allenai/OLMo-2-1124-13B-DPO](https://huggingface.co/allenai/OLMo-2-1124-13B-DPO) |
| **Final Models (RLVR)** | [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) | [allenai/OLMo-2-1124-13B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-13B-Instruct) |
| **Reward Model (RM)**| [allenai/OLMo-2-1124-7B-RM](https://huggingface.co/allenai/OLMo-2-1124-7B-RM) | (Same as 7B) |
## Model description
- **Model type:** A model trained on a mix of publicly available, synthetic and human-created datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** allenai/OLMo-2-1124-13B-DPO
### Model Sources
- **Project Page:** https://allenai.org/olmo
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
- Evaluation code: https://github.com/allenai/olmes
- Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** Coming soon!
- **Demo:** https://playground.allenai.org/
## Using the model
### Loading with HuggingFace
To load the model with HuggingFace, use the following snippet:
```python
from transformers import AutoModelForCausalLM
olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-13B-Instruct")
```
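As a short, hedged follow-up to the loading snippet, a tokenizer plus a `generate` call completes the round trip; the prompt and greedy decoding settings below are illustrative, not prescribed by the card.

```python
from transformers import AutoTokenizer

# Continuing from the snippet above (olmo_model already loaded).
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-13B-Instruct")
inputs = tokenizer("Language modeling is ", return_tensors="pt")
outputs = olmo_model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```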
### Chat template
The chat template for our models is formatted as:
```
<|endoftext|><|user|>\nHow are you doing?\n<|assistant|>\nI'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
Or with new lines expanded:
```
<|endoftext|><|user|>
How are you doing?
<|assistant|>
I'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
The template is also embedded in the tokenizer and can be applied with `tokenizer.apply_chat_template`.
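For example, a minimal sketch of how `apply_chat_template` produces the string shown above (exact special-token handling may vary by tokenizer version):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-13B-Instruct")
messages = [{"role": "user", "content": "How are you doing?"}]

# tokenize=False returns the formatted string instead of token ids;
# add_generation_prompt=True appends the assistant turn header.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```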
### System prompt
In Ai2 demos, we use this system prompt by default:
```
You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI.
```
The model has not been trained with a specific system prompt in mind.
### Bias, Risks, and Limitations
The OLMo 2 models have limited safety training, but are not deployed automatically with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
See the Falcon 180B model card for an example of this.
## Performance
| Model | Average | AlpacaEval | BBH | DROP | GSM8k | IFEval | MATH | MMLU | Safety | PopQA | TruthQA |
|-------|---------|------------|-----|------|--------|---------|------|-------|---------|-------|---------|
| **Open weights models** |
| Gemma-2-9B-it | 51.9 | 43.7 | 2.5 | 58.8 | 79.7 | 69.9 | 29.8 | 69.1 | 75.5 | 28.3 | 61.4 |
| Ministral-8B-Instruct | 52.1 | 31.4 | 56.2 | 56.2 | 80.0 | 56.4 | 40.0 | 68.5 | 56.2 | 20.2 | 55.5 |
| Mistral-Nemo-Instruct-2407 | 51.1 | 45.8 | 56.0 | 23.6 | 81.4 | 64.5 | 31.9 | 70.0 | 52.7 | 26.9 | 57.7 |
| Qwen-2.5-7B-Instruct | 57.1 | 29.7 | 25.3 | 54.4 | 83.8 | 74.7 | 69.9 | 76.6 | 75.0 | 18.1 | 63.1 |
| Llama-3.1-8B-Instruct | 58.9 | 25.8 | 69.7 | 61.7 | 83.4 | 80.6 | 42.5 | 71.3 | 70.2 | 28.4 | 55.1 |
| Tülu 3 8B | 60.4 | 34.0 | 66.0 | 62.6 | 87.6 | 82.4 | 43.7 | 68.2 | 75.4 | 29.1 | 55.0 |
| Qwen-2.5-14B-Instruct | 61.0 | 34.6 | 35.4 | 50.5 | 83.9 | 82.4 | 70.6 | 81.1 | 79.3 | 21.1 | 70.8 |
| **Fully open models** |
| OLMo-7B-Instruct | 28.2 | 5.2 | 35.3 | 30.7 | 14.3 | 32.2 | 2.1 | 46.3 | 54.0 | 17.1 | 44.5 |
| OLMo-7B-0424-Instruct | 33.2 | 8.5 | 35.2 | 47.9 | 23.2 | 39.2 | 5.2 | 48.9 | 49.3 | 18.9 | 55.2 |
| OLMoE-1B-7B-0924-Instruct | 35.5 | 8.5 | 37.2 | 34.3 | 47.2 | 46.2 | 8.4 | 51.6 | 51.6 | 20.6 | 49.1 |
| MAP-Neo-7B-Instruct | 42.9 | 17.6 | 26.4 | 48.2 | 69.4 | 35.9 | 31.5 | 56.5 | 73.7 | 18.4 | 51.6 |
| *OLMo-2-7B-SFT* | 50.0 | 9.3 | 50.7 | 58.2 | 71.2 | 68.0 | 25.1 | 62.0 | 82.4 | 25.0 | 47.8 |
| *OLMo-2-7B-DPO* | 55.0 | 29.9 | 47.0 | 58.8 | 82.4 | 74.5 | 31.2 | 63.4 | 81.5 | 24.5 | 57.2 |
| *OLMo-2-13B-SFT* | 55.7 | 12.0 | 58.8 | 71.8 | 75.7 | 71.5 | 31.1 | 67.3 | 82.8 | 29.3 | 56.2 |
| *OLMo-2-13B-DPO* | 61.0 | 38.3 | 58.5 | 71.9 | 84.2 | 80.6 | 35.0 | 68.5 | 80.6 | 28.9 | 63.9 |
| **OLMo-2-7B-1124-Instruct** | 55.7 | 31.0 | 48.9 | 58.9 | 85.2 | 75.6 | 31.3 | 63.9 | 81.2 | 24.6 | 56.3 |
| **OLMo-2-13B-1124-Instruct** | 61.4 | 37.5 | 58.4 | 72.1 | 87.4 | 80.4 | 39.7 | 68.6 | 77.5 | 28.8 | 63.9 |
## Hyperparameters
PPO settings for RLVR:
- **Learning Rate**: 4 × 10⁻⁷
- **Discount Factor (gamma)**: 1.0
- **General Advantage Estimation (lambda)**: 0.95
- **Mini-batches (N_mb)**: 1
- **PPO Update Iterations (K)**: 4
- **PPO's Clipping Coefficient (epsilon)**: 0.2
- **Value Function Coefficient (c1)**: 0.1
- **Gradient Norm Threshold**: 1.0
- **Learning Rate Schedule**: Linear
- **Generation Temperature**: 1.0
- **Batch Size (effective)**: 512
- **Max Token Length**: 2,048
- **Max Prompt Token Length**: 2,048
- **Penalty Reward Value for Responses without an EOS Token**: -10.0
- **Response Length**: 2,048
- **Total Episodes**: 100,000 (this checkpoint is training step 360)
- **KL penalty coefficient (beta)**: 0.03
- **Warm up ratio (omega)**: 0.0
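For context, the clipping coefficient (epsilon = 0.2) and value function coefficient (c1 = 0.1) listed above parameterize the standard PPO objective; the sketch below is the generic PPO formulation, not an OLMo-specific derivation.

```latex
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_\text{old}}(a_t \mid s_t)}

L^\text{CLIP}(\theta) = \mathbb{E}_t\!\left[ \min\!\big( r_t(\theta)\,\hat{A}_t,\;
    \operatorname{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t \big) \right]

L(\theta) = \mathbb{E}_t\!\left[ L^\text{CLIP}_t(\theta) - c_1\, L^\text{VF}_t(\theta) \right]
```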
## License and use
OLMo 2 is licensed under the Apache 2.0 license.
OLMo 2 is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
This model has been fine-tuned using a dataset mix with outputs generated from third party models and are subject to additional terms: [Gemma Terms of Use](https://ai.google.dev/gemma/terms).
## Citation
A technical manuscript is forthcoming! |
saintsauce/roberta-base_finetuned_model_lr_3e-05_second_run | saintsauce | 2024-11-27T02:29:21Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T02:28:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
saintsauce/roberta-base_finetuned_model_lr_2e-05_second_run | saintsauce | 2024-11-27T02:23:48Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T02:23:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yosefw/llama-3.2-180m-amharic-instruct-dpo | yosefw | 2024-11-27T02:23:43Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:rasyosef/Llama-3.2-180M-Amharic-Instruct",
"base_model:finetune:rasyosef/Llama-3.2-180M-Amharic-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-27T00:35:09Z | ---
base_model: rasyosef/Llama-3.2-180M-Amharic-Instruct
library_name: transformers
model_name: llama-3.2-180m-amharic-instruct-dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama-3.2-180m-amharic-instruct-dpo
This model is a fine-tuned version of [rasyosef/Llama-3.2-180M-Amharic-Instruct](https://huggingface.co/rasyosef/Llama-3.2-180M-Amharic-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="yosefw/llama-3.2-180m-amharic-instruct-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.1.2
- Datasets: 3.1.0
- Tokenizers: 0.20.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
nalsil/results | nalsil | 2024-11-27T02:13:08Z | 7 | 0 | null | [
"tensorboard",
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"region:us"
] | null | 2024-11-27T02:12:15Z | ---
base_model: klue/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4865
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5306 | 1.0 | 1250 | 0.5369 | 0.837 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.5.0
- Datasets 2.19.0
- Tokenizers 0.19.1
|
TinyFish-cn/Mistral-Nemo-pixiv-novel | TinyFish-cn | 2024-11-27T02:09:22Z | 79 | 2 | null | [
"gguf",
"mistral",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-26T05:33:03Z | ---
license: apache-2.0
---
|
xabackus/sexism-detector-Spanish-8842e-6001 | xabackus | 2024-11-27T02:03:45Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T01:12:20Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sexism-detector-Spanish-8842e-6001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sexism-detector-Spanish-8842e-6001
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4871
- Accuracy: 0.8246
- F1: 0.7453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4743 | 1.0 | 225 | 0.4816 | 0.8246 | 0.7453 |
| 0.4602 | 2.0 | 450 | 0.4574 | 0.8246 | 0.7453 |
| 0.4479 | 3.0 | 675 | 0.4804 | 0.8246 | 0.7453 |
| 0.4558 | 4.0 | 900 | 0.4871 | 0.8246 | 0.7453 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
NyanDoggo/Qwen2.5-Coder-7B-Instruct-Spider-Reasoning-GGUF | NyanDoggo | 2024-11-27T01:53:14Z | 40 | 0 | null | [
"gguf",
"qwen2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-27T01:24:24Z | ---
license: apache-2.0
---
|
Fishfishfishfishfish/OLMo-2-1124-7B-Instruct | Fishfishfishfishfish | 2024-11-27T01:45:55Z | 86 | 1 | null | [
"gguf",
"base_model:allenai/OLMo-2-1124-7B-Instruct",
"base_model:quantized:allenai/OLMo-2-1124-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-26T21:09:39Z | ---
license: apache-2.0
base_model:
- allenai/OLMo-2-1124-7B-Instruct
--- |
Jstefanski/results | Jstefanski | 2024-11-27T01:38:06Z | 137 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-27T01:37:34Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.7419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.8036 | 0.96 | 15 | 7.7901 |
| 7.198 | 1.92 | 30 | 7.7419 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cpu
- Datasets 3.1.0
- Tokenizers 0.20.3
|
cvapict/distilbert-base-multilingual-cased-aoe-test12 | cvapict | 2024-11-27T01:26:19Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T01:25:49Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-aoe-test12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-aoe-test12
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1165
- Accuracy: 0.9571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1129 | 1.0 | 353 | 0.1182 | 0.9555 |
| 0.1319 | 2.0 | 706 | 0.1165 | 0.9571 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF | mradermacher | 2024-11-27T01:14:13Z | 57 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"ko",
"base_model:Edentns/DataVortexS-10.7B-v1.0",
"base_model:quantized:Edentns/DataVortexS-10.7B-v1.0",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2024-11-26T21:11:28Z | ---
base_model: Edentns/DataVortexS-10.7B-v1.0
language:
- ko
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Edentns/DataVortexS-10.7B-v1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
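As one hedged option (not mentioned in this card), the `llama-cpp-python` bindings can download and run a single-file quant directly; the filename below matches the i1-Q4_K_M entry in the table that follows, and the prompt and context size are illustrative.

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Sketch under stated assumptions: downloads the i1-Q4_K_M file listed below
# and runs a plain completion; no model-specific prompt format is implied.
llm = Llama.from_pretrained(
    repo_id="mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF",
    filename="DataVortexS-10.7B-v1.0.i1-Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm("한국의 수도는 어디인가요?", max_tokens=64)  # "What is the capital of Korea?"
print(out["choices"][0]["text"])
```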
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 6.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 6.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF/resolve/main/DataVortexS-10.7B-v1.0.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
hZzy/qwen2.5-0.5b-expo-DPO-EXPERIMENT-10-5e6 | hZzy | 2024-11-27T01:10:29Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"alignment-handbook",
"ndcg",
"trl",
"expo",
"generated_from_trainer",
"dataset:hZzy/train_pairwise",
"base_model:hZzy/qwen2.5-0.5b-sft-news-IFT",
"base_model:finetune:hZzy/qwen2.5-0.5b-sft-news-IFT",
"license:apache-2.0",
"region:us"
] | null | 2024-11-26T21:02:20Z | ---
license: apache-2.0
base_model: hZzy/qwen2.5-0.5b-sft-news-IFT
tags:
- alignment-handbook
- ndcg
- trl
- expo
- generated_from_trainer
datasets:
- hZzy/train_pairwise
model-index:
- name: qwen2.5-0.5b-expo-DPO-EXPERIMENT-10-5e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/5jlf70he)
# qwen2.5-0.5b-expo-DPO-EXPERIMENT-10-5e6
This model is a fine-tuned version of [hZzy/qwen2.5-0.5b-sft-news-IFT](https://huggingface.co/hZzy/qwen2.5-0.5b-sft-news-IFT) on the hZzy/train_pairwise dataset.
It achieves the following results on the evaluation set:
- Loss: 15.2566
- Logps: -80.3981
- Logits: -1.0046
- Objective: 15.1445
- Dpo Loss: 15.1445
- Regularize: 15.1445
- Ranking Simple: 0.5134
- Ranking Idealized: 0.5093
- Ranking Idealized Expo: 0.5093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 12
- total_train_batch_size: 288
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Logps | Logits | Objective | Dpo Loss | Regularize | Ranking Simple | Ranking Idealized | Ranking Idealized Expo |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------:|:---------:|:--------:|:----------:|:--------------:|:-----------------:|:----------------------:|
| 9.5723 | 0.2834 | 50 | 9.2586 | -89.6862 | -1.4979 | 9.6501 | 9.6501 | 9.6501 | 0.5134 | 0.5093 | 0.5093 |
| 9.8364 | 0.5668 | 100 | 15.5453 | -79.4201 | -1.3475 | 15.5409 | 15.5409 | 15.5409 | 0.5176 | 0.5093 | 0.5093 |
| 8.8451 | 0.8503 | 150 | 16.6626 | -82.1459 | -1.1122 | 16.5626 | 16.5626 | 16.5626 | 0.5145 | 0.5093 | 0.5093 |
| 3.8083 | 1.1337 | 200 | 16.0519 | -81.6751 | -1.0874 | 16.3240 | 16.3240 | 16.3240 | 0.5186 | 0.5093 | 0.5093 |
| 3.6019 | 1.4171 | 250 | 15.8144 | -81.5609 | -0.9933 | 15.7679 | 15.7679 | 15.7679 | 0.5176 | 0.5093 | 0.5093 |
| 2.1682 | 1.7005 | 300 | 15.3824 | -80.3329 | -1.0036 | 15.2004 | 15.2004 | 15.2004 | 0.5114 | 0.5093 | 0.5093 |
| 2.703 | 1.9839 | 350 | 15.2566 | -80.3981 | -1.0046 | 15.1445 | 15.1445 | 15.1445 | 0.5134 | 0.5093 | 0.5093 |
### Framework versions
- Transformers 4.42.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
autogluon/tabpfn-mix-1.0-classifier | autogluon | 2024-11-27T01:09:58Z | 34,559 | 11 | null | [
"safetensors",
"tabular-classification",
"arxiv:2003.06505",
"arxiv:2207.01848",
"arxiv:2405.13396",
"license:apache-2.0",
"region:us"
] | tabular-classification | 2024-11-22T22:32:14Z | ---
license: apache-2.0
pipeline_tag: tabular-classification
---
# TabPFNMix Classifier
TabPFNMix classifier is a tabular foundation model that is pre-trained on purely synthetic datasets sampled from a mix of random classifiers.
## Architecture
TabPFNMix is based on a 12-layer encoder-decoder Transformer of 37M parameters. We use a pre-training strategy incorporating in-context learning, similar to that used by TabPFN and TabForestPFN.
## Usage
To use TabPFNMix classifier, install AutoGluon by running:
```sh
pip install autogluon
```
A minimal example showing how to perform fine-tuning and inference using the TabPFNMix classifier:
```python
import pandas as pd
from autogluon.tabular import TabularPredictor
if __name__ == '__main__':
    train_data = pd.read_csv('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')
    subsample_size = 5000
    if subsample_size is not None and subsample_size < len(train_data):
        train_data = train_data.sample(n=subsample_size, random_state=0)
    test_data = pd.read_csv('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')

    tabpfnmix_default = {
        "model_path_classifier": "autogluon/tabpfn-mix-1.0-classifier",
        "model_path_regressor": "autogluon/tabpfn-mix-1.0-regressor",
        "n_ensembles": 1,
        "max_epochs": 30,
    }

    hyperparameters = {
        "TABPFNMIX": [
            tabpfnmix_default,
        ],
    }

    label = "class"

    predictor = TabularPredictor(label=label)
    predictor = predictor.fit(
        train_data=train_data,
        hyperparameters=hyperparameters,
        verbosity=3,
    )

    predictor.leaderboard(test_data, display=True)
```
## Citation
If you find TabPFNMix useful for your research, please consider citing the associated papers:
```
@article{erickson2020autogluon,
title={Autogluon-tabular: Robust and accurate automl for structured data},
author={Erickson, Nick and Mueller, Jonas and Shirkov, Alexander and Zhang, Hang and Larroy, Pedro and Li, Mu and Smola, Alexander},
journal={arXiv preprint arXiv:2003.06505},
year={2020}
}
@article{hollmann2022tabpfn,
title={Tabpfn: A transformer that solves small tabular classification problems in a second},
author={Hollmann, Noah and M{\"u}ller, Samuel and Eggensperger, Katharina and Hutter, Frank},
journal={arXiv preprint arXiv:2207.01848},
year={2022}
}
@article{breejen2024context,
title={Why In-Context Learning Transformers are Tabular Data Classifiers},
author={Breejen, Felix den and Bae, Sangmin and Cha, Stephen and Yun, Se-Young},
journal={arXiv preprint arXiv:2405.13396},
year={2024}
}
```
## License
This project is licensed under the Apache-2.0 License.
|
Ellbendls/Qwen-2.5-3b-Text_to_SQL | Ellbendls | 2024-11-27T01:03:40Z | 302 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:gretelai/synthetic_text_to_sql",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-27T00:29:40Z | ---
library_name: transformers
license: mit
datasets:
- gretelai/synthetic_text_to_sql
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: text-generation
---
# Fine-Tuned LLM for Text-to-SQL Conversion
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) designed to convert natural language queries into SQL statements. It was trained on the `gretelai/synthetic_text_to_sql` dataset and can provide both SQL queries and table schema context when needed.
---
## Model Details
### Model Description
This model has been fine-tuned to help users generate SQL queries based on natural language prompts. In scenarios where table schema context is missing, the model is trained to generate schema definitions along with the SQL query, making it a robust solution for various Text-to-SQL tasks.
- **Base Model:** [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
- **Dataset:** [Gretel AI Synthetic Text-to-SQL Dataset](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql)
- **Language:** English
- **License:** MIT
### Key Features
1. **Text-to-SQL Conversion:** Converts natural language queries into accurate SQL statements.
2. **Schema Generation:** Generates table schema context when none is provided.
3. **Optimized for Analytics and Reporting:** Handles SQL queries with aggregation, grouping, and filtering.
---
## Usage
### Direct Use
To use the model for text-to-SQL conversion, you can load it using the `transformers` library as shown below:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Ellbendls/Qwen-2.5-3b-Text_to_SQL")
model = AutoModelForCausalLM.from_pretrained("Ellbendls/Qwen-2.5-3b-Text_to_SQL")
# Input prompt
query = "What is the total number of hospital beds in each state?"
# Tokenize input and generate output
inputs = tokenizer(query, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
# Decode and print
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Example Output
Input:
`What is the total number of hospital beds in each state?`
Output:
```sql
Context:
CREATE TABLE Beds (State VARCHAR(50), Beds INT);
INSERT INTO Beds (State, Beds) VALUES ('California', 100000), ('Texas', 85000), ('New York', 70000);
SQL Query:
SELECT State, SUM(Beds) FROM Beds GROUP BY State;
```
---
## Training Details
### Dataset
The model was fine-tuned on the `gretelai/synthetic_text_to_sql` dataset, which includes diverse natural language queries mapped to SQL queries, with optional schema contexts.
## Limitations
1. **Complex Queries:** May struggle with highly nested or advanced SQL tasks.
2. **Non-English Prompts:** Optimized for English only.
3. **Context Dependence:** May generate incorrect schemas without explicit instructions.
|
Ellbendls/Qwen-2.5-3b-Text_to_SQL-GGUF | Ellbendls | 2024-11-27T01:03:05Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"dataset:gretelai/synthetic_text_to_sql",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-27T00:55:35Z | ---
library_name: transformers
license: mit
datasets:
- gretelai/synthetic_text_to_sql
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: text-generation
---
# Fine-Tuned LLM for Text-to-SQL Conversion
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) designed to convert natural language queries into SQL statements. It was trained on the `gretelai/synthetic_text_to_sql` dataset and can provide both SQL queries and table schema context when needed.
---
## Model Details
### Model Description
This model has been fine-tuned to help users generate SQL queries based on natural language prompts. In scenarios where table schema context is missing, the model is trained to generate schema definitions along with the SQL query, making it a robust solution for various Text-to-SQL tasks.
- **Base Model:** [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
- **Dataset:** [Gretel AI Synthetic Text-to-SQL Dataset](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql)
- **Language:** English
- **License:** MIT
### Key Features
1. **Text-to-SQL Conversion:** Converts natural language queries into accurate SQL statements.
2. **Schema Generation:** Generates table schema context when none is provided.
3. **Optimized for Analytics and Reporting:** Handles SQL queries with aggregation, grouping, and filtering.
---
## Usage
### Direct Use
To use the model for text-to-SQL conversion, you can load it using the `transformers` library as shown below:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Note: this repository ships GGUF weights. Recent transformers releases load
# GGUF by passing the specific file, e.g.
#   AutoModelForCausalLM.from_pretrained(repo_id, gguf_file="<quant>.gguf")
# The "<quant>.gguf" placeholder is illustrative; check the repo's file list.
tokenizer = AutoTokenizer.from_pretrained("Ellbendls/Qwen-2.5-3b-Text_to_SQL-GGUF")
model = AutoModelForCausalLM.from_pretrained("Ellbendls/Qwen-2.5-3b-Text_to_SQL-GGUF")
# Input prompt
query = "What is the total number of hospital beds in each state?"
# Tokenize input and generate output
inputs = tokenizer(query, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
# Decode and print
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Example Output
Input:
`What is the total number of hospital beds in each state?`
Output:
```sql
Context:
CREATE TABLE Beds (State VARCHAR(50), Beds INT);
INSERT INTO Beds (State, Beds) VALUES ('California', 100000), ('Texas', 85000), ('New York', 70000);
SQL Query:
SELECT State, SUM(Beds) FROM Beds GROUP BY State;
```
---
## Training Details
### Dataset
The model was fine-tuned on the `gretelai/synthetic_text_to_sql` dataset, which includes diverse natural language queries mapped to SQL queries, with optional schema contexts.
## Limitations
1. **Complex Queries:** May struggle with highly nested or advanced SQL tasks.
2. **Non-English Prompts:** Optimized for English only.
3. **Context Dependence:** May generate incorrect schemas without explicit instructions.
|
John6666/agenda-mix-pdxl-v15-sdxl | John6666 | 2024-11-27T01:02:53Z | 68 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-07-02T09:14:32Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://civitai.com/models/434919/agenda-mix-pdxl?modelVersionId=613794).
The author is [here](https://huggingface.co/EarthnDusk).
This model was created by [duskfallcrew](https://civitai.com/models/434919?modelVersionId=1062373).
|
normankier/results | normankier | 2024-11-27T01:02:44Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-27T01:02:05Z | ---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
xabackus/sexism-detector-Spanish-8832e-6001 | xabackus | 2024-11-27T01:01:50Z | 165 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T00:53:34Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sexism-detector-Spanish-8832e-6001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sexism-detector-Spanish-8832e-6001
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4705
- Accuracy: 0.8246
- F1: 0.7453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4956 | 1.0 | 225 | 0.4886 | 0.8246 | 0.7453 |
| 0.4603 | 2.0 | 450 | 0.4689 | 0.8246 | 0.7453 |
| 0.4463 | 3.0 | 675 | 0.4705 | 0.8246 | 0.7453 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
DavesArmoury/block_test | DavesArmoury | 2024-11-27T01:00:24Z | 6 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | 2024-11-27T01:00:15Z | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
naresh810/gpt2-law | naresh810 | 2024-11-27T00:53:05Z | 149 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-27T00:53:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xabackus/sexism-detector-Spanish-8852e-5001 | xabackus | 2024-11-27T00:50:36Z | 163 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T00:37:51Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sexism-detector-Spanish-8852e-5001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sexism-detector-Spanish-8852e-5001
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4718
- Accuracy: 0.8246
- F1: 0.7453
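Although the card omits an inference snippet, a minimal sketch is given below; it assumes the standard 🤗 text-classification pipeline applies to this fine-tuned checkpoint, and the example sentence is illustrative.

```python
from transformers import pipeline

# Hedged sketch: the label mapping is not documented in this card.
classifier = pipeline(
    "text-classification",
    model="xabackus/sexism-detector-Spanish-8852e-5001",
)
print(classifier("Ejemplo de texto a clasificar."))
```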
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
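For reference, the list above maps roughly onto the following 🤗 `TrainingArguments`; this is a hedged reconstruction (the `output_dir` is a placeholder and the data/model wiring is omitted), not the author's actual script.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is hypothetical.
args = TrainingArguments(
    output_dir="sexism-detector-Spanish-8852e-5001",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```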
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4876 | 1.0 | 225 | 0.5032 | 0.8246 | 0.7453 |
| 0.4739 | 2.0 | 450 | 0.4775 | 0.8246 | 0.7453 |
| 0.4604 | 3.0 | 675 | 0.4746 | 0.8246 | 0.7453 |
| 0.4614 | 4.0 | 900 | 0.4668 | 0.8246 | 0.7453 |
| 0.4561 | 5.0 | 1125 | 0.4718 | 0.8246 | 0.7453 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
cvapict/distilbert-base-multilingual-cased-aoe-test9-ratio1to2 | cvapict | 2024-11-27T00:45:31Z | 120 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T00:44:55Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-aoe-test9-ratio1to2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-aoe-test9-ratio1to2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3266
- Accuracy: 0.8852
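No usage example is provided in the card; the sketch below assumes the usual sequence-classification head and is illustrative only (the label mapping is undocumented).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged sketch: the example text is hypothetical; labels are not documented here.
name = "cvapict/distilbert-base-multilingual-cased-aoe-test9-ratio1to2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Example sentence to score.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```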
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3447 | 1.0 | 220 | 0.3311 | 0.8580 |
| 0.2438 | 2.0 | 440 | 0.3038 | 0.8807 |
| 0.2101 | 3.0 | 660 | 0.3266 | 0.8852 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Jennny/llama3_8b_sft_ultrafb | Jennny | 2024-11-27T00:43:41Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T21:27:02Z | ---
base_model: meta-llama/Llama-3.1-8B
datasets:
- allenai/ultrafeedback_binarized_cleaned
library_name: transformers
model_name: meta-llama/Llama-3.1-8B
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for meta-llama/Llama-3.1-8B
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the [allenai/ultrafeedback_binarized_cleaned](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Jennny/llama3_8b_sft_ultrafb", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jenny-shen/huggingface/runs/hm26rzoo)
This model was trained with SFT.
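The training script itself is not shown; under TRL 0.12 an SFT run is typically wired up as in the sketch below. The split name, config values, and output directory are assumptions, not the author's actual setup.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hedged sketch: the split name and config values are illustrative.
dataset = load_dataset("allenai/ultrafeedback_binarized_cleaned", split="train_sft")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B",
    args=SFTConfig(output_dir="llama3_8b_sft_ultrafb"),
    train_dataset=dataset,
)
trainer.train()
```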
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
NyanDoggo/Qwen2.5-Coder-3B-Instruct-Spider-Reasoning-GGUF | NyanDoggo | 2024-11-27T00:41:35Z | 52 | 0 | null | [
"gguf",
"qwen2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-26T23:58:50Z | ---
license: apache-2.0
---
|
xabackus/sexism-detector-Spanish-8842e-5001 | xabackus | 2024-11-27T00:35:25Z | 178 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T00:25:04Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sexism-detector-Spanish-8842e-5001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sexism-detector-Spanish-8842e-5001
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4707
- Accuracy: 0.8246
- F1: 0.7453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.496 | 1.0 | 225 | 0.5406 | 0.8246 | 0.7453 |
| 0.4782 | 2.0 | 450 | 0.4728 | 0.8246 | 0.7453 |
| 0.4598 | 3.0 | 675 | 0.4718 | 0.8246 | 0.7453 |
| 0.459 | 4.0 | 900 | 0.4707 | 0.8246 | 0.7453 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
NyanDoggo/Qwen2.5-Coder-3B-Instruct-Spider-Reasoning | NyanDoggo | 2024-11-27T00:31:39Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"region:us"
] | null | 2024-11-26T23:56:28Z | ---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
aidadev48/aidav8 | aidadev48 | 2024-11-27T00:22:29Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-27T00:16:03Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aidadev48
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xabackus/sexism-detector-Spanish-2212e-5001 | xabackus | 2024-11-27T00:13:59Z | 180 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-27T00:03:31Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sexism-detector-Spanish-2212e-5001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sexism-detector-Spanish-2212e-5001
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8560
- Accuracy: 0.8246
- F1: 0.7453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7775 | 1.0 | 900 | 0.8560 | 0.8246 | 0.7453 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
OscarNav/flan-gpt2-medium-distill_V2 | OscarNav | 2024-11-27T00:11:48Z | 137 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-08T07:45:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RylanSchaeffer/collapse_gemma-2-27b_hs2_accumulate_iter3_sftsd2 | RylanSchaeffer | 2024-11-27T00:10:54Z | 8 | 0 | null | [
"safetensors",
"gemma2",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2-27b",
"base_model:finetune:google/gemma-2-27b",
"license:gemma",
"region:us"
] | null | 2024-11-27T00:02:56Z | ---
license: gemma
base_model: google/gemma-2-27b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: collapse_gemma-2-27b_hs2_accumulate_iter3_sftsd2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# collapse_gemma-2-27b_hs2_accumulate_iter3_sftsd2
This model is a fine-tuned version of [google/gemma-2-27b](https://huggingface.co/google/gemma-2-27b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9331
- Num Input Tokens Seen: 13190464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 4
- eval_batch_size: 16
- seed: 2
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log | 0 | 0 | 1.1282 | 0 |
| 2.3244 | 0.0184 | 5 | 1.0518 | 240912 |
| 2.2442 | 0.0368 | 10 | 0.9933 | 480908 |
| 2.1347 | 0.0551 | 15 | 0.9797 | 713948 |
| 2.0779 | 0.0735 | 20 | 0.9788 | 953808 |
| 1.6988 | 0.0919 | 25 | 0.9776 | 1202776 |
| 1.6197 | 0.1103 | 30 | 0.9794 | 1447736 |
| 1.5939 | 0.1286 | 35 | 0.9787 | 1694460 |
| 1.391 | 0.1470 | 40 | 0.9787 | 1934204 |
| 1.1954 | 0.1654 | 45 | 0.9771 | 2171112 |
| 1.1232 | 0.1838 | 50 | 0.9747 | 2409548 |
| 1.1961 | 0.2022 | 55 | 0.9722 | 2648484 |
| 0.9664 | 0.2205 | 60 | 0.9710 | 2887652 |
| 1.1064 | 0.2389 | 65 | 0.9667 | 3127516 |
| 1.0085 | 0.2573 | 70 | 0.9611 | 3368304 |
| 0.8056 | 0.2757 | 75 | 0.9606 | 3603000 |
| 0.9106 | 0.2941 | 80 | 0.9576 | 3850976 |
| 0.9384 | 0.3124 | 85 | 0.9544 | 4094752 |
| 0.8953 | 0.3308 | 90 | 0.9521 | 4345860 |
| 0.8928 | 0.3492 | 95 | 0.9511 | 4588756 |
| 0.7887 | 0.3676 | 100 | 0.9490 | 4837704 |
| 0.9092 | 0.3859 | 105 | 0.9497 | 5078112 |
| 0.7458 | 0.4043 | 110 | 0.9471 | 5318968 |
| 0.762 | 0.4227 | 115 | 0.9463 | 5556324 |
| 0.8916 | 0.4411 | 120 | 0.9436 | 5803288 |
| 0.791 | 0.4595 | 125 | 0.9442 | 6042868 |
| 0.9366 | 0.4778 | 130 | 0.9417 | 6282932 |
| 0.8494 | 0.4962 | 135 | 0.9418 | 6522180 |
| 1.0078 | 0.5146 | 140 | 0.9399 | 6773624 |
| 0.9159 | 0.5330 | 145 | 0.9380 | 7011976 |
| 1.0115 | 0.5513 | 150 | 0.9390 | 7257008 |
| 0.84 | 0.5697 | 155 | 0.9380 | 7501580 |
| 0.8987 | 0.5881 | 160 | 0.9393 | 7742124 |
| 0.9589 | 0.6065 | 165 | 0.9370 | 7981768 |
| 0.8201 | 0.6249 | 170 | 0.9371 | 8222304 |
| 0.7601 | 0.6432 | 175 | 0.9348 | 8469856 |
| 0.7465 | 0.6616 | 180 | 0.9378 | 8710912 |
| 0.8689 | 0.6800 | 185 | 0.9381 | 8949132 |
| 0.6945 | 0.6984 | 190 | 0.9343 | 9196744 |
| 0.7289 | 0.7167 | 195 | 0.9358 | 9434412 |
| 0.583 | 0.7351 | 200 | 0.9336 | 9677156 |
| 0.6272 | 0.7535 | 205 | 0.9356 | 9916792 |
| 0.7919 | 0.7719 | 210 | 0.9353 | 10162084 |
| 0.9377 | 0.7903 | 215 | 0.9334 | 10403240 |
| 0.7397 | 0.8086 | 220 | 0.9330 | 10650280 |
| 0.6871 | 0.8270 | 225 | 0.9342 | 10885396 |
| 0.9175 | 0.8454 | 230 | 0.9339 | 11138056 |
| 0.621 | 0.8638 | 235 | 0.9336 | 11382612 |
| 0.8007 | 0.8822 | 240 | 0.9324 | 11620516 |
| 0.691 | 0.9005 | 245 | 0.9353 | 11865444 |
| 0.7516 | 0.9189 | 250 | 0.9329 | 12109276 |
| 0.9474 | 0.9373 | 255 | 0.9326 | 12346224 |
| 0.7389 | 0.9557 | 260 | 0.9335 | 12594020 |
| 0.7986 | 0.9740 | 265 | 0.9310 | 12844164 |
| 0.9011 | 0.9924 | 270 | 0.9335 | 13090264 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/MFANNv0.25-GGUF | mradermacher | 2024-11-27T00:06:59Z | 249 | 2 | transformers | [
"transformers",
"gguf",
"en",
"dataset:netcat420/MFANN",
"base_model:netcat420/MFANNv0.25",
"base_model:quantized:netcat420/MFANNv0.25",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-26T16:06:39Z | ---
base_model: netcat420/MFANNv0.25
datasets:
- netcat420/MFANN
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/netcat420/MFANNv0.25
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MFANNv0.25-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
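One common way to run these files from Python is llama-cpp-python; the sketch below is illustrative (the file name is taken from the table that follows, and the context size and prompt are assumptions).

```python
from llama_cpp import Llama

# Hedged sketch: assumes the Q4_K_M file from the table below has been downloaded locally.
llm = Llama(model_path="MFANNv0.25.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```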
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MFANNv0.25-GGUF/resolve/main/MFANNv0.25.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
yosefw/llama-3.2-180m-amharic-instruct-apo-2 | yosefw | 2024-11-27T00:01:17Z | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:rasyosef/Llama-3.2-180M-Amharic-Instruct",
"base_model:finetune:rasyosef/Llama-3.2-180M-Amharic-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T19:29:13Z | ---
base_model: rasyosef/Llama-3.2-180M-Amharic-Instruct
library_name: transformers
model_name: llama-3.2-180m-amharic-instruct-apo-2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama-3.2-180m-amharic-instruct-apo-2
This model is a fine-tuned version of [rasyosef/Llama-3.2-180M-Amharic-Instruct](https://huggingface.co/rasyosef/Llama-3.2-180M-Amharic-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="yosefw/llama-3.2-180m-amharic-instruct-apo-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
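The training script is not included in the card; with TRL 0.12 a DPO run generally follows the shape sketched below. The preference dataset, beta, and output directory here are assumptions for illustration, not the author's configuration.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Hedged sketch: the actual preference dataset for this checkpoint is undocumented.
name = "rasyosef/Llama-3.2-180M-Amharic-Instruct"
model = AutoModelForCausalLM.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="llama-3.2-180m-amharic-instruct-apo-2", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```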
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.1.2
- Datasets: 3.1.0
- Tokenizers: 0.20.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
JesusAura999/senik-v1 | JesusAura999 | 2024-11-26T23:48:39Z | 14 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-26T23:47:07Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JesusAura999
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xabackus/sexism-detector-Spanish-8812e-5001 | xabackus | 2024-11-26T23:46:40Z | 181 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T23:43:04Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sexism-detector-Spanish-8812e-5001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sexism-detector-Spanish-8812e-5001
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4860
- Accuracy: 0.8246
- F1: 0.7453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4953 | 1.0 | 225 | 0.4860 | 0.8246 | 0.7453 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
danliu1226/PLM-interact-35M-humanV11 | danliu1226 | 2024-11-26T23:43:50Z | 5 | 0 | null | [
"pytorch",
"safetensors",
"protein-protein interactions",
"paired proteins encoding",
"protein language model",
"license:mit",
"region:us"
] | null | 2024-11-06T22:50:28Z | ---
license: mit
tags:
- protein-protein interactions
- paired proteins encoding
- protein language model
---
This model is trained on human PPIs from the D-SCRIPT benchmark (https://d-script.readthedocs.io/en/stable/data.html).
For more information about the model, see https://huggingface.co/danliu1226/PLM-interact-650M-humanV12. |
danliu1226/PLM-interact-650M-humanV11 | danliu1226 | 2024-11-26T23:39:36Z | 30 | 0 | null | [
"pytorch",
"safetensors",
"protein-protein interactions",
"paired proteins encoding",
"protein language model",
"license:mit",
"region:us"
] | null | 2024-11-06T13:02:51Z | ---
license: mit
tags:
- protein-protein interactions
- paired proteins encoding
- protein language model
---
# PLM-interact model
PLM-interact: extending protein language models to predict protein-protein interactions
The preprint is available at [PLM-interact](https://www.biorxiv.org/content/10.1101/2024.11.05.622169v1), and the code is available on [GitHub](https://github.com/liudan111/PLM-interact).
This model is trained on human PPIs from STRING V12. For the PPI preprocessing details, see the Methods
section of the preprint.
## Model description
PLM-interact, goes beyond a single protein, jointly encoding protein pairs to learn their relationships,
analogous to the next-sentence prediction task from natural language processing. This approach provides
a significant improvement in performance: Trained on human-human PPIs, PLM-interact predicts mouse, fly,
worm, E. coli and yeast PPIs, with 16-28% improvements in AUPR compared with state-of-the-art PPI models.
Additionally, it can detect changes that disrupt or cause PPIs and be applied to virus-host PPI prediction.

### An example to predict interaction probability between proteins
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModelForMaskedLM, AutoTokenizer

class PLMinteract(nn.Module):
    def __init__(self, model_name, num_labels, embedding_size):
        super(PLMinteract, self).__init__()
        self.esm_mask = AutoModelForMaskedLM.from_pretrained(model_name)
        self.embedding_size = embedding_size
        self.classifier = nn.Linear(embedding_size, 1)
        self.num_labels = num_labels

    def forward_test(self, features):
        # Jointly encode the paired proteins and take the CLS-token embedding
        embedding_output = self.esm_mask.base_model(**features, return_dict=True)
        embedding = embedding_output.last_hidden_state[:, 0, :]  # CLS token
        embedding = F.relu(embedding)
        logits = self.classifier(embedding)
        logits = logits.view(-1)
        probability = torch.sigmoid(logits)
        return probability

# folder_huggingface_download: local folder containing the checkpoint downloaded from
# Hugging Face, such as "danliu1226/PLM-interact-650M-humanV11"
# model_name: the ESM2 model that PLM-interact was trained from
# embedding_size: the embedding size of that ESM2 model
folder_huggingface_download = 'download_huggingface_folder/'
model_name = 'facebook/esm2_t33_650M_UR50D'
embedding_size = 1280

protein1 = "EGCVSNLMVCNLAYSGKLEELKESILADKSLATRTDQDSRTALHWACSAGHTEIVEFLLQLGVPVNDKDDAGWSPLHIAASAGRDEIVKALLGKGAQVNAVNQNGCTPLHYAASKNRHEIAVMLLEGGANPDAKDHYEATAMHRAAAKGNLKMIHILLYYKASTNIQDTEGNTPLHLACDEERVEEAKLLVSQGASIYIENKEEKTPLQVAKGGLGLILKRMVEG"
protein2 = "MGQSQSGGHGPGGGKKDDKDKKKKYEPPVPTRVGKKKKKTKGPDAASKLPLVTPHTQCRLKLLKLERIKDYLLMEEEFIRNQEQMKPLEEKQEEERSKVDDLRGTPMSVGTLEEIIDDNHAIVSTSVGSEHYVSILSFVDKDLLEPGCSVLLNHKVHAVIGVLMDDTDPLVTVMKVEKAPQETYADIGGLDNQIQEIKESVELPLTHPEYYEEMGIKPPKGVILYGPPGTGKTLLAKAVANQTSATFLRVVGSELIQKYLGDGPKLVRELFRVAEEHAPSIVFIDEIDAIGTKRYDSNSGGEREIQRTMLELLNQLDGFDSRGDVKVIMATNRIETLDPALIRPGRIDRKIEFPLPDEKTKKRIFQIHTSRMTLADDVTLDDLIMAKDDLSGADIKAICTEAGLMALRERRMKVTNEDFKKSKENVLYKKQEGTPEGLYL"

DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained(model_name)
PLMinter = PLMinteract(model_name, 1, embedding_size)
load_model = torch.load(f"{folder_huggingface_download}pytorch_model.bin")
PLMinter.load_state_dict(load_model)

# Tokenize the two sequences as one paired input
texts = [protein1, protein2]
tokenized = tokenizer(*texts, padding=True, truncation='longest_first', return_tensors="pt", max_length=1603)
tokenized = tokenized.to(DEVICE)
PLMinter.eval()
PLMinter.to(DEVICE)
with torch.no_grad():
    probability = PLMinter.forward_test(tokenized)
print(probability.item())
```
## Training dataset
This model checkpoint is trained on the benchmark human PPIs from https://d-script.readthedocs.io/en/stable/data.html
|
danliu1226/PLM-interact-650M-humanV12 | danliu1226 | 2024-11-26T23:36:53Z | 13 | 0 | null | [
"pytorch",
"safetensors",
"protein-protein interactions",
"paired proteins encoding",
"protein language model",
"region:us"
] | null | 2024-11-06T23:31:17Z | ---
tags:
- protein-protein interactions
- paired proteins encoding
- protein language model
---
# PLM-interact model
PLM-interact: extending protein language models to predict protein-protein interactions
The preprint is available at [PLM-interact](https://www.biorxiv.org/content/10.1101/2024.11.05.622169v1), and the code is available on [GitHub](https://github.com/liudan111/PLM-interact).
This model is trained on human PPIs from STRING V12. For the PPI preprocessing details, see the Methods
section of the preprint.
## Model description
PLM-interact goes beyond a single protein, jointly encoding protein pairs to learn their relationships,
analogous to the next-sentence prediction task from natural language processing. This approach provides
a significant improvement in performance: trained on human-human PPIs, PLM-interact predicts mouse, fly,
worm, E. coli and yeast PPIs, with 16-28% improvements in AUPR compared with state-of-the-art PPI models.
Additionally, it can detect changes that disrupt or cause PPIs, and it can be applied to virus-host PPI prediction.

### An example to predict interaction probability between proteins
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModelForMaskedLM, AutoTokenizer

class PLMinteract(nn.Module):
    def __init__(self, model_name, num_labels, embedding_size):
        super(PLMinteract, self).__init__()
        self.esm_mask = AutoModelForMaskedLM.from_pretrained(model_name)
        self.embedding_size = embedding_size
        self.classifier = nn.Linear(embedding_size, 1)
        self.num_labels = num_labels

    def forward_test(self, features):
        # Jointly encode the paired proteins and take the CLS-token embedding
        embedding_output = self.esm_mask.base_model(**features, return_dict=True)
        embedding = embedding_output.last_hidden_state[:, 0, :]  # CLS token
        embedding = F.relu(embedding)
        logits = self.classifier(embedding)
        logits = logits.view(-1)
        probability = torch.sigmoid(logits)
        return probability

# folder_huggingface_download: local folder containing the checkpoint downloaded from
# Hugging Face, such as "danliu1226/PLM-interact-650M-humanV11"
# model_name: the ESM2 model that PLM-interact was trained from
# embedding_size: the embedding size of that ESM2 model
folder_huggingface_download = 'download_huggingface_folder/'
model_name = 'facebook/esm2_t33_650M_UR50D'
embedding_size = 1280

protein1 = "EGCVSNLMVCNLAYSGKLEELKESILADKSLATRTDQDSRTALHWACSAGHTEIVEFLLQLGVPVNDKDDAGWSPLHIAASAGRDEIVKALLGKGAQVNAVNQNGCTPLHYAASKNRHEIAVMLLEGGANPDAKDHYEATAMHRAAAKGNLKMIHILLYYKASTNIQDTEGNTPLHLACDEERVEEAKLLVSQGASIYIENKEEKTPLQVAKGGLGLILKRMVEG"
protein2 = "MGQSQSGGHGPGGGKKDDKDKKKKYEPPVPTRVGKKKKKTKGPDAASKLPLVTPHTQCRLKLLKLERIKDYLLMEEEFIRNQEQMKPLEEKQEEERSKVDDLRGTPMSVGTLEEIIDDNHAIVSTSVGSEHYVSILSFVDKDLLEPGCSVLLNHKVHAVIGVLMDDTDPLVTVMKVEKAPQETYADIGGLDNQIQEIKESVELPLTHPEYYEEMGIKPPKGVILYGPPGTGKTLLAKAVANQTSATFLRVVGSELIQKYLGDGPKLVRELFRVAEEHAPSIVFIDEIDAIGTKRYDSNSGGEREIQRTMLELLNQLDGFDSRGDVKVIMATNRIETLDPALIRPGRIDRKIEFPLPDEKTKKRIFQIHTSRMTLADDVTLDDLIMAKDDLSGADIKAICTEAGLMALRERRMKVTNEDFKKSKENVLYKKQEGTPEGLYL"

DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained(model_name)
PLMinter = PLMinteract(model_name, 1, embedding_size)
load_model = torch.load(f"{folder_huggingface_download}pytorch_model.bin")
PLMinter.load_state_dict(load_model)

# Tokenize the two sequences as one paired input
texts = [protein1, protein2]
tokenized = tokenizer(*texts, padding=True, truncation='longest_first', return_tensors="pt", max_length=1603)
tokenized = tokenized.to(DEVICE)
PLMinter.eval()
PLMinter.to(DEVICE)
with torch.no_grad():
    probability = PLMinter.forward_test(tokenized)
print(probability.item())
```
## Training data
Human PPIs from STRING V12
This model has been pushed to the Hub using ****:
- Repo: [More Information Needed]
- Docs: [More Information Needed] |
cvapict/distilbert-base-multilingual-cased-aoe-test8 | cvapict | 2024-11-26T23:36:26Z | 120 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T23:35:57Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-aoe-test8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-aoe-test8
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1767
- Accuracy: 0.942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2044 | 1.0 | 250 | 0.1663 | 0.935 |
| 0.0767 | 2.0 | 500 | 0.1555 | 0.939 |
| 0.0247 | 3.0 | 750 | 0.1767 | 0.942 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/CAG-13b-i1-GGUF | mradermacher | 2024-11-26T23:35:47Z | 41 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ruotong-pan/CAGB",
"base_model:ruotong-pan/CAG-13b",
"base_model:quantized:ruotong-pan/CAG-13b",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-26T08:50:26Z | ---
base_model: ruotong-pan/CAG-13b
datasets:
- ruotong-pan/CAGB
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ruotong-pan/CAG-13b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CAG-13b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
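To fetch a single quant programmatically instead of via the links below, `huggingface_hub` works; a brief sketch (the chosen filename is one entry from the table that follows):

```python
from huggingface_hub import hf_hub_download

# Downloads one imatrix quant from this repo to the local cache.
path = hf_hub_download(
    repo_id="mradermacher/CAG-13b-i1-GGUF",
    filename="CAG-13b.i1-Q4_K_M.gguf",
)
print(path)
```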
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/CAG-13b-i1-GGUF/resolve/main/CAG-13b.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
saintsauce/distilbert-base-uncased_finetuned_model_lr_5e-05 | saintsauce | 2024-11-26T23:35:01Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T23:34:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kenken6696/Llama-3.2-3B_fix_tail | kenken6696 | 2024-11-26T23:28:05Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-12T23:25:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
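In the absence of author-provided code, the sketch below assumes the standard causal-LM interface; the prompt, dtype, and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: dtype, prompt, and token budget are assumptions.
name = "kenken6696/Llama-3.2-3B_fix_tail"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello, world.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```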
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
saintsauce/distilbert-base-uncased_finetuned_model_lr_3e-05 | saintsauce | 2024-11-26T23:20:53Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T23:20:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
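In the absence of author-provided code, a minimal sketch using the `pipeline` API (the label set comes from the checkpoint's config and is not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="saintsauce/distilbert-base-uncased_finetuned_model_lr_3e-05",
)
# Returns a list of {"label": ..., "score": ...} dicts; the labels are
# whatever the fine-tuning run configured.
print(classifier("This is a sample input."))
```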
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NyanDoggo/Qwen2.5-Coder-3B-Instruct-Spider-Baseline | NyanDoggo | 2024-11-26T23:15:55Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T16:13:41Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
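A hedged sketch assuming the model keeps the Qwen2.5-Instruct chat template and was fine-tuned for text-to-SQL (the tags mention SFT and the name references Spider); the prompt below is an illustrative guess, not a documented input format:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NyanDoggo/Qwen2.5-Coder-3B-Instruct-Spider-Baseline"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical text-to-SQL request; the real schema/prompt format is undocumented.
messages = [{"role": "user",
             "content": "Schema: singer(id, name, age). Question: How many singers are there?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```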
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kenken6696/Llama-3.2-3B_fix_middle | kenken6696 | 2024-11-26T23:13:10Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T23:10:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
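As with the sibling checkpoint, a minimal sketch via the high-level `pipeline` API (the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="kenken6696/Llama-3.2-3B_fix_middle")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```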
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
c2p-cmd/GPT2-Summarizer | c2p-cmd | 2024-11-26T23:11:24Z | 130 | 0 | transformers | [
"transformers",
"coreml",
"safetensors",
"gpt2",
"text-generation",
"summarization",
"base_model:openai-community/gpt2",
"base_model:quantized:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2024-11-26T23:05:29Z | ---
license: mit
base_model:
- openai-community/gpt2
pipeline_tag: summarization
library_name: transformers
---
Fine-tuned version of GPT-2 for summarization, provided in both PyTorch and Core ML formats.
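No usage example is provided; since GPT-2 is decoder-only, summarization is presumably done by prompting. A hedged sketch (the `TL;DR:` separator is an assumption, not documented):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="c2p-cmd/GPT2-Summarizer")
article = "Long input text to be summarized goes here."
# Append a TL;DR marker and let the model continue with a summary (assumed format).
print(generator(article + "\nTL;DR:", max_new_tokens=60)[0]["generated_text"])
```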
|
saintsauce/distilbert-base-uncased_finetuned_model_lr_2e-05 | saintsauce | 2024-11-26T23:06:47Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T23:06:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
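A lower-level sketch with explicit tokenizer/model calls (class-index output only, since the label mapping is undocumented):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "saintsauce/distilbert-base-uncased_finetuned_model_lr_2e-05"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("This is a sample input.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```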
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
biustnaspust/kurde5 | biustnaspust | 2024-11-26T23:05:43Z | 42 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T23:01:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
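A minimal sampling sketch with the standard auto classes (all decoding settings below are illustrative defaults, not author recommendations):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "biustnaspust/kurde5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```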
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
deivism/bert-finetuned-ner | deivism | 2024-11-26T23:01:13Z | 29 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-25T22:36:10Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0274
- Precision: 0.9550
- Recall: 0.9638
- F1: 0.9594
- Accuracy: 0.9973
## Model description
More information needed
## Intended uses & limitations
More information needed
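Pending details from the author, the model can presumably be tried with the token-classification pipeline (the entity label set is whatever the unknown training dataset used):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="deivism/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("My name is Clara and I live in Berkeley."))
```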
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
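For reference, these settings correspond roughly to the following `TrainingArguments` sketch (`output_dir` and anything not listed above are placeholders):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-finetuned-ner",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```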
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 148 | 0.0305 | 0.8341 | 0.8789 | 0.8559 | 0.9934 |
| No log | 2.0 | 296 | 0.0215 | 0.8834 | 0.9355 | 0.9087 | 0.9953 |
| No log | 3.0 | 444 | 0.0195 | 0.9140 | 0.9435 | 0.9285 | 0.9961 |
| 0.0655 | 4.0 | 592 | 0.0195 | 0.9282 | 0.9498 | 0.9389 | 0.9964 |
| 0.0655 | 5.0 | 740 | 0.0203 | 0.9177 | 0.9539 | 0.9355 | 0.9962 |
| 0.0655 | 6.0 | 888 | 0.0201 | 0.9401 | 0.9552 | 0.9475 | 0.9966 |
| 0.0056 | 7.0 | 1036 | 0.0200 | 0.9355 | 0.9535 | 0.9444 | 0.9968 |
| 0.0056 | 8.0 | 1184 | 0.0208 | 0.9393 | 0.9569 | 0.9480 | 0.9967 |
| 0.0056 | 9.0 | 1332 | 0.0215 | 0.9380 | 0.9549 | 0.9464 | 0.9968 |
| 0.0056 | 10.0 | 1480 | 0.0232 | 0.9188 | 0.9582 | 0.9381 | 0.9960 |
| 0.0024 | 11.0 | 1628 | 0.0212 | 0.9334 | 0.9554 | 0.9442 | 0.9967 |
| 0.0024 | 12.0 | 1776 | 0.0223 | 0.9383 | 0.9598 | 0.9489 | 0.9968 |
| 0.0024 | 13.0 | 1924 | 0.0225 | 0.9394 | 0.9542 | 0.9468 | 0.9967 |
| 0.0012 | 14.0 | 2072 | 0.0232 | 0.9415 | 0.9560 | 0.9487 | 0.9968 |
| 0.0012 | 15.0 | 2220 | 0.0238 | 0.9413 | 0.9580 | 0.9496 | 0.9967 |
| 0.0012 | 16.0 | 2368 | 0.0239 | 0.9396 | 0.9582 | 0.9488 | 0.9966 |
| 0.001 | 17.0 | 2516 | 0.0230 | 0.9328 | 0.9563 | 0.9444 | 0.9966 |
| 0.001 | 18.0 | 2664 | 0.0243 | 0.9342 | 0.9577 | 0.9458 | 0.9966 |
| 0.001 | 19.0 | 2812 | 0.0246 | 0.9423 | 0.9576 | 0.9499 | 0.9969 |
| 0.001 | 20.0 | 2960 | 0.0240 | 0.9355 | 0.9576 | 0.9464 | 0.9967 |
| 0.0006 | 21.0 | 3108 | 0.0241 | 0.9477 | 0.9599 | 0.9538 | 0.9970 |
| 0.0006 | 22.0 | 3256 | 0.0236 | 0.9443 | 0.9569 | 0.9505 | 0.9968 |
| 0.0006 | 23.0 | 3404 | 0.0244 | 0.9461 | 0.9578 | 0.9519 | 0.9969 |
| 0.0006 | 24.0 | 3552 | 0.0248 | 0.9417 | 0.9600 | 0.9508 | 0.9969 |
| 0.0006 | 25.0 | 3700 | 0.0246 | 0.9336 | 0.9590 | 0.9461 | 0.9966 |
| 0.0006 | 26.0 | 3848 | 0.0236 | 0.9421 | 0.9589 | 0.9504 | 0.9968 |
| 0.0006 | 27.0 | 3996 | 0.0244 | 0.9441 | 0.9612 | 0.9526 | 0.9969 |
| 0.0004 | 28.0 | 4144 | 0.0250 | 0.9462 | 0.9594 | 0.9528 | 0.9969 |
| 0.0004 | 29.0 | 4292 | 0.0249 | 0.9430 | 0.9622 | 0.9525 | 0.9969 |
| 0.0004 | 30.0 | 4440 | 0.0252 | 0.9439 | 0.9612 | 0.9525 | 0.9969 |
| 0.0003 | 31.0 | 4588 | 0.0253 | 0.9480 | 0.9552 | 0.9515 | 0.9968 |
| 0.0003 | 32.0 | 4736 | 0.0229 | 0.9484 | 0.9619 | 0.9551 | 0.9969 |
| 0.0003 | 33.0 | 4884 | 0.0235 | 0.9485 | 0.9608 | 0.9546 | 0.9970 |
| 0.0003 | 34.0 | 5032 | 0.0247 | 0.9438 | 0.9611 | 0.9524 | 0.9969 |
| 0.0003 | 35.0 | 5180 | 0.0248 | 0.9481 | 0.9598 | 0.9539 | 0.9970 |
| 0.0003 | 36.0 | 5328 | 0.0245 | 0.9441 | 0.9621 | 0.9530 | 0.9969 |
| 0.0003 | 37.0 | 5476 | 0.0255 | 0.9417 | 0.9602 | 0.9508 | 0.9967 |
| 0.0002 | 38.0 | 5624 | 0.0255 | 0.9416 | 0.9595 | 0.9505 | 0.9969 |
| 0.0002 | 39.0 | 5772 | 0.0246 | 0.9524 | 0.9611 | 0.9567 | 0.9971 |
| 0.0002 | 40.0 | 5920 | 0.0254 | 0.9435 | 0.9611 | 0.9522 | 0.9969 |
| 0.0003 | 41.0 | 6068 | 0.0252 | 0.9386 | 0.9608 | 0.9496 | 0.9966 |
| 0.0003 | 42.0 | 6216 | 0.0257 | 0.9385 | 0.9601 | 0.9492 | 0.9968 |
| 0.0003 | 43.0 | 6364 | 0.0251 | 0.9491 | 0.9591 | 0.9541 | 0.9970 |
| 0.0002 | 44.0 | 6512 | 0.0251 | 0.9448 | 0.9610 | 0.9528 | 0.9970 |
| 0.0002 | 45.0 | 6660 | 0.0252 | 0.9508 | 0.9622 | 0.9565 | 0.9972 |
| 0.0002 | 46.0 | 6808 | 0.0252 | 0.9486 | 0.9613 | 0.9549 | 0.9971 |
| 0.0002 | 47.0 | 6956 | 0.0262 | 0.9498 | 0.9618 | 0.9558 | 0.9971 |
| 0.0001 | 48.0 | 7104 | 0.0263 | 0.9520 | 0.9624 | 0.9572 | 0.9971 |
| 0.0001 | 49.0 | 7252 | 0.0263 | 0.9521 | 0.9624 | 0.9573 | 0.9971 |
| 0.0001 | 50.0 | 7400 | 0.0260 | 0.9526 | 0.9618 | 0.9572 | 0.9972 |
| 0.0001 | 51.0 | 7548 | 0.0248 | 0.9493 | 0.9634 | 0.9563 | 0.9971 |
| 0.0001 | 52.0 | 7696 | 0.0255 | 0.9502 | 0.9618 | 0.9560 | 0.9971 |
| 0.0001 | 53.0 | 7844 | 0.0258 | 0.9522 | 0.9617 | 0.9569 | 0.9972 |
| 0.0001 | 54.0 | 7992 | 0.0258 | 0.9481 | 0.9615 | 0.9548 | 0.9970 |
| 0.0001 | 55.0 | 8140 | 0.0251 | 0.9520 | 0.9617 | 0.9568 | 0.9972 |
| 0.0001 | 56.0 | 8288 | 0.0250 | 0.9509 | 0.9608 | 0.9558 | 0.9972 |
| 0.0001 | 57.0 | 8436 | 0.0260 | 0.9462 | 0.9601 | 0.9531 | 0.9972 |
| 0.0001 | 58.0 | 8584 | 0.0252 | 0.9563 | 0.9628 | 0.9595 | 0.9973 |
| 0.0001 | 59.0 | 8732 | 0.0247 | 0.9506 | 0.9624 | 0.9565 | 0.9972 |
| 0.0001 | 60.0 | 8880 | 0.0251 | 0.9510 | 0.9611 | 0.9560 | 0.9972 |
| 0.0001 | 61.0 | 9028 | 0.0255 | 0.9495 | 0.9614 | 0.9554 | 0.9972 |
| 0.0001 | 62.0 | 9176 | 0.0259 | 0.9537 | 0.9613 | 0.9575 | 0.9972 |
| 0.0001 | 63.0 | 9324 | 0.0259 | 0.9506 | 0.9609 | 0.9557 | 0.9972 |
| 0.0001 | 64.0 | 9472 | 0.0260 | 0.9544 | 0.9595 | 0.9569 | 0.9972 |
| 0.0 | 65.0 | 9620 | 0.0253 | 0.9511 | 0.9604 | 0.9557 | 0.9972 |
| 0.0 | 66.0 | 9768 | 0.0257 | 0.9526 | 0.9604 | 0.9565 | 0.9972 |
| 0.0 | 67.0 | 9916 | 0.0263 | 0.9528 | 0.9605 | 0.9566 | 0.9972 |
| 0.0 | 68.0 | 10064 | 0.0271 | 0.9544 | 0.9598 | 0.9571 | 0.9972 |
| 0.0 | 69.0 | 10212 | 0.0269 | 0.9530 | 0.9611 | 0.9571 | 0.9972 |
| 0.0 | 70.0 | 10360 | 0.0273 | 0.9514 | 0.9609 | 0.9561 | 0.9972 |
| 0.0 | 71.0 | 10508 | 0.0275 | 0.9535 | 0.9612 | 0.9573 | 0.9972 |
| 0.0 | 72.0 | 10656 | 0.0275 | 0.9524 | 0.9632 | 0.9578 | 0.9972 |
| 0.0 | 73.0 | 10804 | 0.0279 | 0.9537 | 0.9596 | 0.9566 | 0.9972 |
| 0.0 | 74.0 | 10952 | 0.0277 | 0.9475 | 0.9633 | 0.9554 | 0.9970 |
| 0.0 | 75.0 | 11100 | 0.0272 | 0.9537 | 0.9614 | 0.9575 | 0.9972 |
| 0.0 | 76.0 | 11248 | 0.0269 | 0.9541 | 0.9619 | 0.9580 | 0.9972 |
| 0.0 | 77.0 | 11396 | 0.0271 | 0.9552 | 0.9625 | 0.9588 | 0.9972 |
| 0.0 | 78.0 | 11544 | 0.0274 | 0.9457 | 0.9619 | 0.9537 | 0.9970 |
| 0.0 | 79.0 | 11692 | 0.0273 | 0.9524 | 0.9616 | 0.9570 | 0.9972 |
| 0.0 | 80.0 | 11840 | 0.0275 | 0.9530 | 0.9632 | 0.9581 | 0.9972 |
| 0.0 | 81.0 | 11988 | 0.0271 | 0.9496 | 0.9639 | 0.9567 | 0.9971 |
| 0.0 | 82.0 | 12136 | 0.0280 | 0.9537 | 0.9614 | 0.9575 | 0.9972 |
| 0.0 | 83.0 | 12284 | 0.0277 | 0.9499 | 0.9642 | 0.9570 | 0.9970 |
| 0.0 | 84.0 | 12432 | 0.0275 | 0.9517 | 0.9621 | 0.9569 | 0.9971 |
| 0.0 | 85.0 | 12580 | 0.0277 | 0.9524 | 0.9635 | 0.9579 | 0.9972 |
| 0.0 | 86.0 | 12728 | 0.0275 | 0.9517 | 0.9648 | 0.9582 | 0.9972 |
| 0.0 | 87.0 | 12876 | 0.0276 | 0.9519 | 0.9636 | 0.9577 | 0.9972 |
| 0.0 | 88.0 | 13024 | 0.0276 | 0.9541 | 0.9647 | 0.9594 | 0.9972 |
| 0.0 | 89.0 | 13172 | 0.0275 | 0.9500 | 0.9642 | 0.9571 | 0.9971 |
| 0.0 | 90.0 | 13320 | 0.0276 | 0.9532 | 0.9635 | 0.9584 | 0.9972 |
| 0.0 | 91.0 | 13468 | 0.0273 | 0.9542 | 0.9636 | 0.9589 | 0.9972 |
| 0.0 | 92.0 | 13616 | 0.0274 | 0.9541 | 0.9636 | 0.9588 | 0.9973 |
| 0.0 | 93.0 | 13764 | 0.0274 | 0.9552 | 0.9638 | 0.9595 | 0.9973 |
| 0.0 | 94.0 | 13912 | 0.0275 | 0.9547 | 0.9636 | 0.9591 | 0.9973 |
| 0.0 | 95.0 | 14060 | 0.0274 | 0.9557 | 0.9639 | 0.9598 | 0.9973 |
| 0.0 | 96.0 | 14208 | 0.0274 | 0.9548 | 0.9638 | 0.9593 | 0.9973 |
| 0.0 | 97.0 | 14356 | 0.0274 | 0.9550 | 0.9641 | 0.9595 | 0.9973 |
| 0.0 | 98.0 | 14504 | 0.0275 | 0.9552 | 0.9643 | 0.9597 | 0.9973 |
| 0.0 | 99.0 | 14652 | 0.0274 | 0.9549 | 0.9638 | 0.9593 | 0.9973 |
| 0.0 | 100.0 | 14800 | 0.0274 | 0.9550 | 0.9638 | 0.9594 | 0.9973 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF | mradermacher | 2024-11-26T23:00:09Z | 87 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:CultriX/SeQwence-14B-EvolMergev1",
"base_model:quantized:CultriX/SeQwence-14B-EvolMergev1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-26T17:43:59Z | ---
base_model: CultriX/SeQwence-14B-EvolMergev1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/CultriX/SeQwence-14B-EvolMergev1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
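For example, a single quant can be fetched programmatically with `huggingface_hub` (file name taken from the table below) and then loaded with any GGUF-capable runtime:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF",
    filename="SeQwence-14B-EvolMergev1.i1-Q4_K_M.gguf",
)
print(path)  # pass this file to llama.cpp or another GGUF runtime
```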
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/SeQwence-14B-EvolMergev1-GGUF | mradermacher | 2024-11-26T22:55:48Z | 39 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:CultriX/SeQwence-14B-EvolMergev1",
"base_model:quantized:CultriX/SeQwence-14B-EvolMergev1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-26T16:22:02Z | ---
base_model: CultriX/SeQwence-14B-EvolMergev1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/CultriX/SeQwence-14B-EvolMergev1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
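As one option, the `llama-cpp-python` bindings can load a downloaded quant directly (prompt and settings are illustrative):
```python
from llama_cpp import Llama

# Assumes the file has been downloaded locally; pick any quant from the table below.
llm = Llama(model_path="SeQwence-14B-EvolMergev1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```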
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q4_0_4_4.gguf) | Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SeQwence-14B-EvolMergev1-GGUF/resolve/main/SeQwence-14B-EvolMergev1.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
allenai/OLMo-2-1124-13B-GGUF | allenai | 2024-11-26T22:48:18Z | 1,691 | 2 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-26T06:39:34Z | ---
license: apache-2.0
---
GGUF version of https://huggingface.co/allenai/OLMo-2-1124-13B
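A quant file can be fetched with `huggingface_hub`; the file name below is a placeholder, so check the repository's file list for the exact quant you want:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="allenai/OLMo-2-1124-13B-GGUF",
    filename="<quant-file>.gguf",  # placeholder: pick a real file from the repo
)
```
|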
mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF | mradermacher | 2024-11-26T22:39:23Z | 16 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-11-26T16:06:58Z | ---
base_model: MrRobotoAI/Odin-v1.1-8b-FICTION-1024k
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/MrRobotoAI/Odin-v1.1-8b-FICTION-1024k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
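A combined download-and-run sketch with `huggingface_hub` and `llama-cpp-python` (quant choice and prompt are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF",
    filename="Odin-v1.1-8b-FICTION-1024k.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Write the opening line of a fantasy story:", max_tokens=48)["choices"][0]["text"])
```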
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.1-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.1-8b-FICTION-1024k.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
JuniperChinenye/zzzz4 | JuniperChinenye | 2024-11-26T22:37:55Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T22:34:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
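A minimal streaming sketch with the standard auto classes (prompt is a placeholder; no chat template is documented):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "JuniperChinenye/zzzz4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Stream tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
inputs = tokenizer("In a distant future,", return_tensors="pt")
model.generate(**inputs, max_new_tokens=40, streamer=streamer)
```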
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JuniperChinenye/zzzz3 | JuniperChinenye | 2024-11-26T22:34:10Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T22:31:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
saintsauce/albert-base-v2_finetuned_model_lr_5e-05 | saintsauce | 2024-11-26T22:33:29Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T22:33:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KnutJaegersberg/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF | KnutJaegersberg | 2024-11-26T22:25:01Z | 8 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"de",
"bg",
"cs",
"da",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sl",
"sv",
"sk",
"base_model:openGPT-X/Teuken-7B-instruct-research-v0.4",
"base_model:quantized:openGPT-X/Teuken-7B-instruct-research-v0.4",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | text-generation | 2024-11-26T22:24:36Z | ---
language:
- de
- bg
- cs
- da
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sl
- sv
- sk
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
library_name: transformers
base_model: openGPT-X/Teuken-7B-instruct-research-v0.4
license: other
tags:
- llama-cpp
- gguf-my-repo
---
# KnutJaegersberg/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF
This model was converted to GGUF format from [`openGPT-X/Teuken-7B-instruct-research-v0.4`](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo KnutJaegersberg/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF --hf-file teuken-7b-instruct-research-v0.4-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo KnutJaegersberg/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF --hf-file teuken-7b-instruct-research-v0.4-q4_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo KnutJaegersberg/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF --hf-file teuken-7b-instruct-research-v0.4-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo KnutJaegersberg/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF --hf-file teuken-7b-instruct-research-v0.4-q4_k_m-imat.gguf -c 2048
```
|
mradermacher/DataVortexS-10.7B-v1.0-GGUF | mradermacher | 2024-11-26T22:16:51Z | 83 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"ko",
"base_model:Edentns/DataVortexS-10.7B-v1.0",
"base_model:quantized:Edentns/DataVortexS-10.7B-v1.0",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-26T04:12:22Z | ---
base_model: Edentns/DataVortexS-10.7B-v1.0
language:
- ko
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/Edentns/DataVortexS-10.7B-v1.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
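For multi-part quants, a minimal sketch of reassembling and running a file might look like this (file names here are illustrative, not actual files from this repo):

```bash
# Older byte-split files are reassembled with plain concatenation
cat model.Q4_K_M.gguf.part-* > model.Q4_K_M.gguf

# Newer llama.cpp splits (*-00001-of-0000N.gguf) use the bundled merge tool instead:
# llama-gguf-split --merge model.Q4_K_M-00001-of-00002.gguf model.Q4_K_M.gguf

# Run the merged file
llama-cli -m model.Q4_K_M.gguf -p "Hello"
```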
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q4_0_4_4.gguf) | Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DataVortexS-10.7B-v1.0-GGUF/resolve/main/DataVortexS-10.7B-v1.0.f16.gguf) | f16 | 21.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
PrunaAI/ehristoforu-SoRu-0006-bnb-8bit-smashed | PrunaAI | 2024-11-26T22:12:22Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"pruna-ai",
"base_model:ehristoforu/SoRu-0006",
"base_model:quantized:ehristoforu/SoRu-0006",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-26T22:09:24Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ehristoforu/SoRu-0006
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by synchronizing all GPU processes and stopping measurement only once all of them have finished. "Async" metrics are obtained without synchronizing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
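As a rough illustration of the sync/async distinction, a minimal sketch (not Pruna's actual benchmarking code) could look like:

```python
import time
import torch

def time_async(fn):
    # "Async": stop the clock when the call returns control to the CPU;
    # queued GPU kernels may still be running at this point.
    start = time.perf_counter()
    out = fn()
    return time.perf_counter() - start, out

def time_sync(fn):
    # "Sync": wait for all queued GPU work to finish before stopping the clock.
    torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn()
    torch.cuda.synchronize()
    return time.perf_counter() - start, out
```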
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo ehristoforu/SoRu-0006 are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the smashed (8-bit) checkpoint and the original model's tokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/ehristoforu-SoRu-0006-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ehristoforu/SoRu-0006")

# Tokenize a prompt, generate a completion, and decode it
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, ehristoforu/SoRu-0006, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
ehristoforu/SoRu-0008 | ehristoforu | 2024-11-26T22:11:08Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:ehristoforu/SoRu-0007",
"base_model:finetune:ehristoforu/SoRu-0007",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T22:10:44Z | ---
base_model: ehristoforu/SoRu-0007
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ehristoforu
- **License:** apache-2.0
- **Finetuned from model :** ehristoforu/SoRu-0007
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
saintsauce/albert-base-v2_finetuned_model_lr_3e-05 | saintsauce | 2024-11-26T22:03:17Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T22:03:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KnutJaegersberg/Teuken-7B-instruct-research-v0.4-8.0bpw-exl2 | KnutJaegersberg | 2024-11-26T21:51:58Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"de",
"bg",
"cs",
"da",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sl",
"sv",
"sk",
"arxiv:2410.08800",
"arxiv:2309.11998",
"arxiv:2410.03730",
"arxiv:2410.08928",
"base_model:openGPT-X/Teuken-7B-base-v0.4",
"base_model:quantized:openGPT-X/Teuken-7B-base-v0.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-11-26T20:52:21Z | ---
language:
- de
- bg
- cs
- da
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sl
- sv
- sk
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
library_name: transformers
base_model:
- openGPT-X/Teuken-7B-base-v0.4
license: other
---
# Model Card for Teuken-7B-instruct-research-v0.4
[Teuken-7B-instruct-research-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) is an instruction-tuned 7B parameter multilingual large language model (LLM) pre-trained with 4T tokens within the research project [OpenGPT-X](https://opengpt-x.de).
The base model Teuken-7B-base-v0.4 is available on request 📧 <a href="mailto:[email protected]">[email protected]</a>.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Fraunhofer, Forschungszentrum Jülich, TU Dresden, DFKI
- **Funded by:** German Federal Ministry of Economics and Climate Protection (BMWK) in the context of the OpenGPT-X project
- **Model type:** Transformer based decoder-only model
- **Language(s) (NLP):** bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
- **Shared by:** OpenGPT-X
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
[Teuken-7B-instruct-research-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) focuses on covering all 24 EU languages and therefore delivers more stable results across these languages and reflects European values in its answers better than English-centric models. It is therefore specialized for use in multilingual tasks.
Since the underlying base model is trained on all 24 EU languages, Teuken-7B-instruct-research-v0.4 is also intended for research use in these 24 languages.
## Disclaimer: Toxic Content
This Large Language Model (LLM) may generate content that is inappropriate, offensive, or harmful. While the dataset has been heavily filtered to minimize such outputs, the model may still produce text that is biased or toxic due to the large scale and diverse nature of the data.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model is not intended for use in math and coding tasks.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[Teuken-7B-instruct-research-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) is an instruction-tuned version of Teuken-7B-base-v0.4 (base model is available on request 📧 <a href="mailto:[email protected]">[email protected]</a>) that is not completely free from biases and hallucinations.
## How to Get Started with the Model
## Usage
The model requires transformers, sentencepiece, and the torch library.
After installation, here's an example of how to use the model:
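For example, the dependencies can be installed with:

```bash
pip install transformers sentencepiece torch
```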
As this model is a fine-tuned model, it must be used with the provided prompt template. Using the model without the prompt template is not intended and is not recommended. The prompt template is defined as follows:
```python
user="Hi!"
lang_code = "DE"
system_messages={
"EN": "A chat between a human and an artificial intelligence assistant."
" The assistant gives helpful and polite answers to the human's questions.",
"DE": "Ein Gespräch zwischen einem Menschen und einem Assistenten mit künstlicher Intelligenz."
" Der Assistent gibt hilfreiche und höfliche Antworten auf die Fragen des Menschen.",
}
prompt = f"System: {system_messages[lang_code]}\nUser: {user}\nAssistant:"
```
The prompt template is also directly integrated in the Tokenizer and can be used as follows:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name = "openGPT-X/Teuken-7B-instruct-research-v0.4"
model = AutoModelForCausalLM.from_pretrained(
model_name,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
)
model = model.to(device).eval()
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=True,
)
messages = [{"role": "User", "content": "Hallo"}]
prompt_ids = tokenizer.apply_chat_template(messages, chat_template="DE", tokenize=True, add_generation_prompt=True, return_tensors="pt")
prediction = model.generate(
prompt_ids.to(model.device),
max_length=512,
do_sample=True,
top_k=50,
top_p=0.95,
temperature=0.7,
num_return_sequences=1,
)
prediction_text = tokenizer.decode(prediction[0].tolist())
print(prediction_text)
```
This example demonstrates how to load the model and tokenizer, prepare input, generate text, and print the result.
## Training Details
### Pre-Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[Teuken-7B-instruct-research-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) was pre-trained on 4 trillion tokens of data from publicly available sources.
The pretraining data has a cutoff of September 2023.
More information is available in our preprint ["Data Processing for the OpenGPT-X Model Family"](http://arxiv.org/abs/2410.08800).
### Instruction-Tuning Data
For the dataset composition, we used a selection of English and German datasets from which we sampled our final dataset with equal distribution between German and English, as shown in the following tables.
### English
* We only included a subsample of the OpenOrca dataset.
* For the LMSYS-Chat dataset, we kept only examples meeting the high-quality criteria of [LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset](https://arxiv.org/abs/2309.11998), i.e., the model answer stems from one of "GPT-3.5-turbo", "GPT-4", "Claude-1", "Claude-instant-1", or "Claude-2" and is in English.
* To select instruction-tuning examples based on their quality, we calculated the reward scores of all English examples using [Starling-RM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-RM-7B-alpha) (Apache-2.0 license).
For the English data, we performed the following sample-selection steps:
1. Add all multi-turn examples
2. Add entire `code_alpaca` dataset subset
3. Add entire `lmsys_chat_1m_high_quality_train_en` dataset subset
4. For the remaining dataset subsets (`open_orca`, `evol_instruct_143k`, `evol_instruct_70k`, `sharegpt_v3`, `ultrachat_200k`, `bactrianx_EN`), we add the samples with the highest reward scores so that each dataset subset contributes an equal amount of high-quality examples (sketched below)
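A minimal sketch of that selection step (field names such as `subset` and `reward` are illustrative assumptions, not the project's actual schema):

```python
from collections import defaultdict

def top_reward_per_subset(examples, subsets, budget_per_subset):
    # Bucket the scored examples by dataset subset
    buckets = defaultdict(list)
    for ex in examples:
        if ex["subset"] in subsets:
            buckets[ex["subset"]].append(ex)
    # Take the highest-reward examples from each subset so that every
    # subset contributes an equal number of high-quality samples
    selected = []
    for name in subsets:
        ranked = sorted(buckets[name], key=lambda ex: ex["reward"], reverse=True)
        selected.extend(ranked[:budget_per_subset])
    return selected
```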
| Dataset | Sample Count |
| ----------------------------------------------------- | ------------ |
| anon8231489123/ShareGPT_Vicuna_unfiltered | 37.6K |
| MBZUAI/Bactrian-X | 26.9K |
| Open-Orca/OpenOrca | 26.9K |
| WizardLM/WizardLM_evol_instruct_70k | 26.9K |
| WizardLM/WizardLM_evol_instruct_V2_196k | 26.8K |
| sahil2801/CodeAlpaca-20k | 12.1K |
| lmsys/lmsys-chat-1m | 11.2K |
| HuggingFaceH4/ultrachat_200k | 7.0K |
| **total** | **175.5K** |
### German
For German data we include the complete data sets from the given table:
| Dataset | Sample Count |
| ----------------------------------------------------------- | ------------ |
| MBZUAI/Bactrian-X DE | 63.7K |
| FreedomIntelligence/evol-instruct-deutsch | 55.9K |
| FreedomIntelligence/alpaca-gpt4-deutsch | 47.5K |
| FreedomIntelligence/sharegpt-deutsch | 5.8K |
| LeoLM/German_Songs | 943 |
| LeoLM/German_Poems | 378 |
| bjoernp/ultrachat_de | 909 |
| **total** | **175.13K** |
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Instruction fine-tuned version of Teuken-7B-base-v0.4.
More information regarding the pre-training is available in our model preprint ["Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs"](https://arxiv.org/abs/2410.03730).
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision <!--fp32, fp16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Results on multilingual benchmarks for 21 European languages with instruction-tuned models
| Model | Avg. | EU21-ARC | EU21-HeSw | EU21-TQA | EU21-MMLU |
|--------------------------------|--------|----------|-----------|----------|-----------|
| Meta-Llama-3.1-8B-Instruct | **.563** | .563 | .579 | .532 | **.576** |
| Mistral-7B-Instruct-v0.3 | .527 | .530 | .538 | **.548** | .491 |
| Salamandra-7B-Instruct | .543 | **.595** | **.637** | .482 | .459 |
| Aya-23-8B | .485 | .475 | .535 | .476 | .455 |
| Occiglot-7B-eu5-Instruct | .475 | .484 | .519 | .471 | .428 |
| Pharia-1-LLM-7B-C-A | .417 | .396 | .438 | .469 | .366 |
| Bloomz-7B1 | .358 | .316 | .354 | .461 | .302 |
| **Teuken-7B-instruct-research-v0.4** | .543 | .581 | .624 | .543 | .425 |
More information regarding the quality of our translated benchmarks is available in our evaluation preprint ["Towards Multilingual LLM Evaluation for European Languages"](https://arxiv.org/abs/2410.08928).
More evaluation results regarding Teuken-7B-instruct-research-v0.4 are available in our model preprint ["Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs"](https://arxiv.org/abs/2410.03730).
The model was evaluated in 21 languages on ARC, GSM8K, HellaSwag, TruthfulQA, Translation and MMLU. Results can also be seen in the [European LLM Leaderboard](https://huggingface.co/spaces/openGPT-X/european-llm-leaderboard).
## Technical Specifications
### Model Architecture and Objective
| Hyper-Parameter | Value |
|----------------------------|----------|
| Training Objective | CLM |
| Activation Function | SwiGLU |
| Seq Length | 4096 |
| Position Embeddings | Rotary |
| Num Layers | 32 |
| Hidden Size | 4096 |
| FFN Hidden Size | 13440 |
| Num Attention Heads | 32 |
| Head Dim | 128 |
| Group Query Attention | yes |
| Num Query Groups | 2 |
| Normalization | RMSNorm |
| Learning rate | 3e-4 |
| Min learning rate | 3e-5 |
| Disable bias in linear | yes |
| Hidden dropout | 0.0 |
| Attention dropout | 0.0 |
| Optimizer | AdamW |
| Beta1 | 0.9 |
| Beta2 | 0.95 |
| Data-type | bf16 |
| Recompute-activations | yes |
| Distributed-optimizers | yes |
### Compute Infrastructure
We trained our models on JUWELS Booster, which consists of 936 compute nodes, each equipped with 4 NVIDIA A100 GPUs. The GPUs are hosted by AMD EPYC Rome CPUs. The compute nodes are connected with HDR-200 InfiniBand in a DragonFly+ topology.
#### Hardware
The configuration of JUWELS Booster compute nodes is the following:
CPU: AMD EPYC 7402 processor; 2 sockets, 24 cores per socket, SMT-2 (total: 2×24×2 = 96 threads) in NPS-4 configuration
Memory: 512 GB DDR4-3200 RAM (of which at least 20 GB is taken by the system software stack, including the file system); 256 GB per socket; 8 memory channels per socket (2 channels per NUMA domain)
GPU: 4 × NVIDIA A100 Tensor Core GPU with 40 GB; connected via NVLink3 to each other
Network: 4 × Mellanox HDR200 InfiniBand ConnectX 6 (200 Gbit/s each), HCA
Periphery: CPU, GPU, and network adapter are connected via 2 PCIe Gen 4 switches with 16 PCIe lanes going to each device (CPU socket: 2×16 lanes). PCIe switches are configured in synthetic mode.
#### Software
[Megatron-LM](https://github.com/OpenGPTX/Megatron-LM)
**BibTeX:**
If you find our model useful in your research, please consider citing our [preprint](https://arxiv.org/abs/2410.03730):
```
@misc{ali2024teuken7bbaseteuken7binstructeuropean,
title={Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs},
author={Mehdi Ali and Michael Fromm and Klaudia Thellmann and Jan Ebert and Alexander Arno Weber and Richard Rutmann and Charvi Jain and Max Lübbering and Daniel Steinigen and Johannes Leveling and Katrin Klug and Jasper Schulze Buschhoff and Lena Jurkschat and Hammam Abdelwahab and Benny Jörg Stein and Karl-Heinz Sylla and Pavel Denisov and Nicolo' Brandizzi and Qasid Saleem and Anirban Bhowmick and Lennard Helmer and Chelsea John and Pedro Ortiz Suarez and Malte Ostendorff and Alex Jude and Lalith Manjunath and Samuel Weinbach and Carolin Penke and Oleg Filatov and Shima Asaadi and Fabio Barth and Rafet Sifa and Fabian Küch and Andreas Herten and René Jäkel and Georg Rehm and Stefan Kesselheim and Joachim Köhler and Nicolas Flores-Herr},
year={2024},
eprint={2410.03730},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.03730},
}
```
# Team
## Data Team
Anirban Bhowmick (IAIS), Nicolo Brandizzi (IAIS), Lennard Helmer (IAIS), Benny Jörg Stein (IAIS), Karl-Heinz Sylla (IAIS), Pavel Denisov (IAIS), Qasid Saleem (IAIS), Johannes Leveling (IAIS), Hammam Abdelwahab (IAIS), Luzian Hahn (IIS), Farzad Naderi (IIS), Md Saiful Islam (IIS), Alexander Schwirjow (IIS), Pedro Ortiz Suarez (ex. DFKI), Malte Ostendorff (ex. DFKI)
## Model-Training Team
### Core contributors
Mehdi Ali (IAIS), Michael Fromm (IAIS), Jan Ebert (FZJ), Chelsea John (FZJ), Lena Jurkschat (TUD), Alexander Weber (IAIS)
### Contributors:
Richard Rutmann (IAIS), Daniel Steinigen (IAIS), Lalith Manjunath (TUD), Carolin Penke (FZJ)
## Evaluation Team
### Core contributors
Klaudia Thellmann (TUD), Alex Jude (IAIS), Jasper Buschhoff (IAIS)
### Contributors:
Shima Assadi (IIS), Fabio Barth (DFKI)
## Management
Joachim Köhler (IAIS), Nicolas Flores-Herr (IAIS), Stefan Kesselheim (FZJ), Andreas Herten (FZJ), Georg Rehm (DFKI), René Jäkel (TUD), Fabian Küch (IIS), Nicole Hildebrandt (IAIS), Ines Wendler (IAIS)
We believe that collaboration is key to overcoming the aforementioned limitations and thereby strengthening the European GenAI landscape. Because of this, the team invites researchers, developers, and AI enthusiasts to join and engage through various platforms. A Discord server has been created for community collaboration, offering a space for discussions on technical details, ideas, and direct interaction with developers. Additionally, resources like research publications and a European LLM Leaderboard provide insights into Teuken-7B's performance and technical aspects. The OpenGPT-X team encourages ongoing engagement and collaboration as the project evolves.
Key links:
Discord: OpenGPT-X [Discord server](https://discord.com/invite/RvdHpGMvB3)
Research Papers: OpenGPT-X News [Research Papers](https://opengpt-x.de/en/news-en/)
LLM Leaderboard: European LLM Leaderboard [LLM Leaderboard](https://huggingface.co/spaces/openGPT-X/european-llm-leaderboard)
<div class="hf-card">
<h2>Contact Information</h2>
<p>You can reach out to the following model card contact:</p>
<ul>
<li>
<a href="https://huggingface.co/openGPT-X" target="_blank">OpenGPT-X</a>
    - <a href="mailto:[email protected]">[email protected]</a>
</li>
</ul>
</div> |
ehristoforu/SoRu-0004 | ehristoforu | 2024-11-26T21:49:48Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:ehristoforu/SoRu-0003",
"base_model:finetune:ehristoforu/SoRu-0003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T21:49:25Z | ---
base_model: ehristoforu/SoRu-0003
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ehristoforu
- **License:** apache-2.0
- **Finetuned from model :** ehristoforu/SoRu-0003
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jeremierostan/WiLlamaII | jeremierostan | 2024-11-26T21:38:17Z | 136 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"dataset:jeremierostan/Fake_WiLlama",
"base_model:jeremierostan/shakespeare-llama",
"base_model:finetune:jeremierostan/shakespeare-llama",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T21:36:33Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: jeremierostan/shakespeare-llama
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- jeremierostan/Fake_WiLlama
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
ehristoforu/SoRu-0002 | ehristoforu | 2024-11-26T21:37:32Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:ehristoforu/SoRu-0001",
"base_model:finetune:ehristoforu/SoRu-0001",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T21:36:57Z | ---
base_model: ehristoforu/SoRu-0001
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ehristoforu
- **License:** apache-2.0
- **Finetuned from model :** ehristoforu/SoRu-0001
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ehristoforu/SoRu-0001 | ehristoforu | 2024-11-26T21:31:09Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct",
"base_model:finetune:Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T21:21:51Z | ---
base_model: Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ehristoforu
- **License:** apache-2.0
- **Finetuned from model :** Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
datalab-to/surya_layout0 | datalab-to | 2024-11-26T21:25:58Z | 511,964 | 1 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-26T21:21:34Z | ---
library_name: transformers
license: cc-by-nc-sa-4.0
---
Layout model for [surya](https://www.github.com/VikParuchuri/surya) |
shachardon/mistral-7b-naturally-occurring-feedback-ft-kto | shachardon | 2024-11-26T21:14:08Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T21:07:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
enikeev/Cotype-Nano-MLX | enikeev | 2024-11-26T21:07:03Z | 99 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"ru",
"en",
"base_model:MTSAIR/Cotype-Nano",
"base_model:finetune:MTSAIR/Cotype-Nano",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T20:58:39Z | ---
library_name: transformers
language:
- ru
- en
pipeline_tag: text-generation
license: other
license_name: apache-2.0
license_link: https://huggingface.co/MTSAIR/Cotype-Nano/blob/main/Apache%20License%20MTS%20AI.docx
base_model: MTSAIR/Cotype-Nano
tags:
- mlx
---
# enikeev/Cotype-Nano-MLX
The Model [enikeev/Cotype-Nano-MLX](https://huggingface.co/enikeev/Cotype-Nano-MLX) was
converted to MLX format from [MTSAIR/Cotype-Nano](https://huggingface.co/MTSAIR/Cotype-Nano)
using mlx-lm version **0.20.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("enikeev/Cotype-Nano-MLX")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
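The converted model can also be run from the command line with the generation script that ships with mlx-lm; this is a minimal sketch assuming a recent mlx-lm release:
```bash
python -m mlx_lm.generate --model enikeev/Cotype-Nano-MLX --prompt "hello"
```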
|
mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF | mradermacher | 2024-11-26T21:00:09Z | 118 | 2 | transformers | [
"transformers",
"gguf",
"trl",
"orpo",
"en",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental",
"base_model:quantized:nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-26T15:19:03Z | ---
base_model: nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental
datasets:
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- trl
- orpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
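As a minimal sketch, a single quant from the table below can be fetched and run with llama.cpp; the `Q4_K_M` file name is taken from the table, and `huggingface-cli`/`llama-cli` are assumed to be installed:
```bash
# Download one quant file from this repo, then run it with llama.cpp.
huggingface-cli download mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF \
  Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q4_K_M.gguf --local-dir .
llama-cli -m Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q4_K_M.gguf -p "Once upon a time"
```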
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Moderne-12B-FFT-experimental-i1-GGUF/resolve/main/Mistral-Nemo-Moderne-12B-FFT-experimental.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
saintsauce/roberta-base_finetuned_model_lr_5e-05 | saintsauce | 2024-11-26T20:58:21Z | 97 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T20:57:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
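Pending details from the authors, here is a minimal loading sketch; it assumes the standard sequence-classification head implied by the repo's `text-classification` tag:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: a fine-tuned RoBERTa classifier loadable with the standard Auto classes.
repo = "saintsauce/roberta-base_finetuned_model_lr_5e-05"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input text", return_tensors="pt")
logits = model(**inputs).logits
```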
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hZzy/qwen2.5-0.5b-expo-DPO-EXPERIMENT-100-5e6 | hZzy | 2024-11-26T20:53:43Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"alignment-handbook",
"ndcg",
"trl",
"expo",
"generated_from_trainer",
"dataset:hZzy/train_pairwise",
"base_model:hZzy/qwen2.5-0.5b-sft-news-IFT",
"base_model:finetune:hZzy/qwen2.5-0.5b-sft-news-IFT",
"license:apache-2.0",
"region:us"
] | null | 2024-11-26T16:43:29Z | ---
license: apache-2.0
base_model: hZzy/qwen2.5-0.5b-sft-news-IFT
tags:
- alignment-handbook
- ndcg
- trl
- expo
- generated_from_trainer
datasets:
- hZzy/train_pairwise
model-index:
- name: qwen2.5-0.5b-expo-DPO-EXPERIMENT-100-5e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/ptp2yd12)
# qwen2.5-0.5b-expo-DPO-EXPERIMENT-100-5e6
This model is a fine-tuned version of [hZzy/qwen2.5-0.5b-sft-news-IFT](https://huggingface.co/hZzy/qwen2.5-0.5b-sft-news-IFT) on the hZzy/train_pairwise dataset.
It achieves the following results on the evaluation set:
- Loss: 153.9577
- Logps: -79.3234
- Logits: -1.1891
- Objective: 152.3114
- Dpo Loss: 152.3114
- Regularize: 152.3114
- Ranking Simple: 0.5227
- Ranking Idealized: 0.5093
- Ranking Idealized Expo: 0.5093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 12
- total_train_batch_size: 288
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
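For reference, the total train batch size above is the product of the per-device batch size, the gradient accumulation steps, and the device count: 4 × 12 × 6 = 288.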
### Training results
| Training Loss | Epoch | Step | Validation Loss | Logps | Logits | Objective | Dpo Loss | Regularize | Ranking Simple | Ranking Idealized | Ranking Idealized Expo |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------:|:---------:|:--------:|:----------:|:--------------:|:-----------------:|:----------------------:|
| 89.5677 | 0.2834 | 50 | 97.0098 | -93.4757 | -1.4670 | 103.5481 | 103.5481 | 103.5481 | 0.5072 | 0.5093 | 0.5093 |
| 102.7372 | 0.5668 | 100 | 164.4481 | -79.3850 | -1.4159 | 169.0837 | 169.0837 | 169.0837 | 0.5238 | 0.5093 | 0.5093 |
| 86.6457 | 0.8503 | 150 | 159.7297 | -80.3621 | -1.2164 | 155.2103 | 155.2103 | 155.2103 | 0.5279 | 0.5093 | 0.5093 |
| 40.1205 | 1.1337 | 200 | 164.8019 | -78.8446 | -1.1758 | 161.0171 | 161.0171 | 161.0171 | 0.5248 | 0.5093 | 0.5093 |
| 40.2475 | 1.4171 | 250 | 156.8958 | -80.0693 | -1.2420 | 156.9776 | 156.9776 | 156.9776 | 0.5279 | 0.5093 | 0.5093 |
| 24.0056 | 1.7005 | 300 | 154.3221 | -79.4678 | -1.1971 | 153.7111 | 153.7111 | 153.7111 | 0.5238 | 0.5093 | 0.5093 |
| 25.1496 | 1.9839 | 350 | 153.9577 | -79.3234 | -1.1891 | 152.3116 | 152.3116 | 152.3116 | 0.5227 | 0.5093 | 0.5093 |
### Framework versions
- Transformers 4.42.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
marwaALzaabi/plant-identification-vit | marwaALzaabi | 2024-11-26T20:52:55Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-large-patch16-224-in21k",
"base_model:finetune:google/vit-large-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-11-26T11:35:17Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-large-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: plant-identification-vit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plant-identification-vit
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0315
- Accuracy: 0.8096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0085 | 1.0 | 953 | 1.0659 | 0.7762 |
| 0.6805 | 2.0 | 1906 | 0.8413 | 0.8029 |
| 0.5039 | 3.0 | 2859 | 0.7920 | 0.8069 |
| 0.3847 | 4.0 | 3812 | 0.7760 | 0.8102 |
| 0.2826 | 5.0 | 4765 | 0.8024 | 0.8049 |
| 0.2229 | 6.0 | 5718 | 0.8382 | 0.8099 |
| 0.1064 | 7.0 | 6671 | 0.8983 | 0.8074 |
| 0.0676 | 8.0 | 7624 | 0.9672 | 0.8072 |
| 0.027 | 9.0 | 8577 | 1.0089 | 0.8099 |
| 0.0209 | 10.0 | 9530 | 1.0315 | 0.8096 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mahmoudOmar03/AIC_1_2 | mahmoudOmar03 | 2024-11-26T20:49:40Z | 68 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-23T15:03:51Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mahmoudOmar03
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
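A minimal loading sketch (an assumption, not part of the original card): it presumes the uploaded weights are a merged checkpoint loadable with the standard transformers Auto classes; if the repo instead holds LoRA adapters, load them with PEFT on top of the base model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: merged fine-tuned weights, loadable directly from this repo.
repo = "mahmoudOmar03/AIC_1_2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```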
|
Mechabruh/retrained_model | Mechabruh | 2024-11-26T20:45:48Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-26T10:45:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
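Pending details from the authors, here is a minimal loading sketch; it assumes the standard seq2seq loading implied by the repo's Marian architecture and `text2text-generation` tag:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumption: a MarianMT-style seq2seq checkpoint loadable with the standard Auto classes.
repo = "Mechabruh/retrained_model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Example source sentence", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```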
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DeL-TaiseiOzaki/Tengentoppa-llm-jp-13B-base | DeL-TaiseiOzaki | 2024-11-26T20:45:15Z | 51 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ja",
"en",
"base_model:llm-jp/llm-jp-3-13b",
"base_model:finetune:llm-jp/llm-jp-3-13b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T18:38:38Z | ---
license: apache-2.0
language:
- ja
- en
base_model:
- llm-jp/llm-jp-3-13b
pipeline_tag: text-generation
library_name: transformers
---
# Enhanced LLM-JP Model with Extended Tokenizer and Chat Template
This is an enhanced version of [llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) with an extended tokenizer that includes additional special tokens for structured conversations and advanced prompting.

## Model Information
- Base Model: [llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b)
- Added Features: Extended tokenizer with special tokens for structured conversations and chat template
- Vocabulary Size: Extended from the base model
## Special Tokens
### Basic Tokens
- UNK Token: `{token_config.unk_token}`
- BOS Token: `{token_config.bos_token}`
- EOS Token: `{token_config.eos_token}`
- PAD Token: `{token_config.pad_token}`
- CLS Token: `{token_config.cls_token}`
- SEP Token: `{token_config.sep_token}`
- MASK Token: `{token_config.mask_token}`
### Conversation Structure Tokens
- System: `{token_config.system_token}` and `{token_config.system_end_token}`
- User: `{token_config.user_token}` and `{token_config.user_end_token}`
- Assistant: `{token_config.assistant_token}` and `{token_config.assistant_end_token}`
### Reasoning Process Tokens
- Reasoning: `{token_config.reasoning_token}` and `{token_config.reasoning_end_token}`
- Solution: `{token_config.solution_token}` and `{token_config.solution_end_token}`
- Response: `{token_config.response_token}` and `{token_config.response_end_token}`
### Hint and Supplementary Information Tokens
- Hint: `{token_config.hint_token}` and `{token_config.hint_end_token}`
- Note: `{token_config.note_token}` and `{token_config.note_end_token}`
- Context: `{token_config.context_token}` and `{token_config.context_end_token}`
- Reference: `{token_config.reference_token}` and `{token_config.reference_end_token}`
- Example: `{token_config.example_token}` and `{token_config.example_end_token}`
### Control Tokens
- Important: `{token_config.important_token}` and `{token_config.important_end_token}`
- Warning: `{token_config.warning_token}` and `{token_config.warning_end_token}`
- Error: `{token_config.error_token}` and `{token_config.error_end_token}`
## Chat Template Usage
This model supports the following roles:
- system: for the system prompt
- user: for user input
- hint: for hints and guidance
- reasoning: for the reasoning process
- assistant: for the assistant's response
### Basic Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("{model_name}")
tokenizer = AutoTokenizer.from_pretrained("{model_name}")
# Example of chat-style usage
messages = [
{
"role": "system",
"content": "あなたは親切で有能なAIアシスタントです。"
},
{
"role": "user",
"content": "次の数学の問題を解いてください:2x + 3 = 7"
},
{
"role": "hint",
"content": "方程式を解くときは、まず両辺から数を移項することを考えてみましょう。"
},
{
"role": "reasoning",
"content": "この方程式を解くために以下のステップで考えます:\\n1. 3を両辺から引く\\n2. 両辺を2で割る"
},
{
"role": "assistant",
"content": "x = 2 が方程式の解です。"
}
]
# Format the messages with the chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print("\\nGenerated prompt:\\n", prompt)
# Tokenization and inference
inputs = tokenizer(prompt, return_tensors="pt", max_length=2048, truncation=True)
outputs = model.generate(**inputs, max_length=2048, temperature=0.7)
response = tokenizer.decode(outputs[0])
print("\\nModel response:\\n", response)
```
### Advanced Usage:
```python
# Use a custom system message
messages = [
{
"role": "system",
"content": "あなたは数学の専門家です。"
},
{
"role": "user",
"content": "二次方程式 x² - 4x + 4 = 0 を解いてください。"
}
]
# Apply the template without adding a generation prompt
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=False
)
# Manually add a hint
prompt += "\\n<|HINT|>因数分解を使うと簡単に解けるかもしれません。</|HINT|>"
# Manually add a reasoning step
prompt += "\\n<|REASONING|>1. この式は(x-2)²の形に似ています\\n2. 実際に展開すると同じ式になります</|REASONING|>"
# Append the prompt for the assistant's response
prompt += "\\n<|ASSISTANT|>"
# From here on, process as usual
inputs = tokenizer(prompt, return_tensors="pt", max_length=2048, truncation=True)
```
## Chat Template Specification
The model's chat template includes the following elements:
- Five distinct roles (system, user, hint, reasoning, assistant)
- Special tokens corresponding to each role
- A default system message
- A flexible template structure
Features:
- Message order is preserved
- Each role is clearly distinguished
- The system message is optional
- Hints and reasoning can be added as needed
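As a quick sanity check, the added special tokens and the extended vocabulary can be inspected directly from the tokenizer; a minimal sketch, assuming the tokens were registered as additional special tokens:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("DeL-TaiseiOzaki/Tengentoppa-llm-jp-13B-base")
print(tok.additional_special_tokens)   # conversation/reasoning/hint control tokens
print(len(tok))                        # extended vocabulary size
print(tok.chat_template is not None)   # chat template bundled with the tokenizer
```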
## Additional Notes
### About the Tokenizer Extension
- Retains all functionality of the original tokenizer
- Extended functionality through the newly added special tokens
- Support for structured conversations via the chat template
### Usage Notes
- Use the special tokens only when they are needed
- The chat template can be adjusted flexibly
- The system message can be customized to the context of the conversation
|
PrunaAI/MrRobotoAI-Freyja-v4.95-StoryGen-7b-NON-FICTION-bnb-8bit-smashed | PrunaAI | 2024-11-26T20:42:46Z | 5 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-26T20:33:23Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: MrRobotoAI/Freyja-v4.95-StoryGen-7b-NON-FICTION
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo MrRobotoAI/Freyja-v4.95-StoryGen-7b-NON-FICTION. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/MrRobotoAI-Freyja-v4.95-StoryGen-7b-NON-FICTION-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("MrRobotoAI/Freyja-v4.95-StoryGen-7b-NON-FICTION")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, MrRobotoAI/Freyja-v4.95-StoryGen-7b-NON-FICTION, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
peter198477/fantasy_girls | peter198477 | 2024-11-26T20:36:54Z | 10 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2024-11-26T20:35:54Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/workspace_trainsamples_800456207595858981_1057d678-087c-4204-9331-489efa825494.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: fantasy
---
# rkj
<Gallery />
## Trigger words
You should use `fantasy` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/peter198477/fantasy_girls/tree/main) them in the Files & versions tab.
|
BigHuggyD/TheDrummer_Behemoth-123B-v2.1_exl2_5.0bpw_h6 | BigHuggyD | 2024-11-26T20:36:12Z | 8 | 0 | null | [
"safetensors",
"mistral",
"license:other",
"5-bit",
"exl2",
"region:us"
] | null | 2024-11-26T20:29:51Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2500 members strong 💪
### Now with more channels! A hub for creatives and makers alike!
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v2.1 🦣
> Nothing in the void is foreign to us. The place we go is the place we belong.

## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v2.1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v2.1-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v2.1-GGUF (recommended for smaller quants)
## Description
Behemoth v2.x is a finetune of the new Largestral 2411 with system prompt support. Testers have noted that **everything** felt improved.
### Usage
Testers say this frankenformat maximizes the model's potential: **Metharme** with Mistral's new system tokens
- `[SYSTEM_PROMPT] <|system|>{{system_message}}[/SYSTEM_PROMPT]<|user|>{{user_message}}<|model|>{{assistant_message}}`
- `<|system|>[SYSTEM_PROMPT] {{system_message}}[/SYSTEM_PROMPT]<|user|>{{user_message}}<|model|>{{assistant_message}}`
*Take note that the opening system tag SHOULD ALWAYS be followed by a whitespace, as shown above.*
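A minimal sketch of assembling the first frankenformat variant by hand (the message variables are placeholders, not part of the original card):
```python
# Assumption: plain string assembly matching the first template above,
# including the whitespace after the opening [SYSTEM_PROMPT] tag.
system_message = "You are a helpful assistant."
user_message = "Hello!"
prompt = (
    "[SYSTEM_PROMPT] <|system|>" + system_message + "[/SYSTEM_PROMPT]"
    "<|user|>" + user_message + "<|model|>"
)
```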
Complete SillyTavern Settings in BeaverAI Club: https://discord.com/channels/1238219753324281886/1309968730301792370/1309968730301792370
### Versions
- [v2.0](https://huggingface.co/TheDrummer/Behemoth-123B-v2) is equivalent to Behemoth v1.0 (Classic)
- [v2.1](https://huggingface.co/TheDrummer/Behemoth-123B-v2.1) is equivalent to Behemoth v1.1 (Creative Boost)
- [v2.2](https://huggingface.co/TheDrummer/Behemoth-123B-v2.2) is an improvement of Behemoth v2.1 (Creative++)
## Special Thanks
Thank you to each and everyone who donated/subscribed in [Ko-Fi](https://ko-fi.com/thedrummer) 🙇 I hope to never disappoint!
```
Toasty Pigeon
theguywhogamesalot
Grozi
F
Marinara
Ko-fi Supporter
Grozi
Phaelon
ONTHEREDTEAM
EvarinSharath'fe(USM-Valor)
Silva
Dakkidaze
AlexTheVP
Pseudo
Kistara
Dr. Fjut
Grozi 🥈
KinjiHakari777
dustywintr
Syd
HumbleConsumer
Syd
Ko-fi Supporter
Arkamist
joe 🥇
Toad
Lied
Konnect
Kistara
Grozi 🥉
SleepDeprived3
Luigi
Nestor
```
https://ko-fi.com/thedrummer/leaderboard
```
Finetuned by yours truly,
Drummer
```

|
Esmarguz/restaurants-reviews | Esmarguz | 2024-11-26T20:32:40Z | 157 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T19:59:34Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: restaurants-reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# restaurants-reviews
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3579
- Model Preparation Time: 0.0034
- Accuracy: 0.1818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:--------:|
| No log | 1.0 | 6 | 2.3591 | 0.0034 | 0.1818 |
| 2.1236 | 2.0 | 12 | 2.3392 | 0.0034 | 0.2727 |
| 2.1236 | 3.0 | 18 | 2.3579 | 0.0034 | 0.1818 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
AliSaadatV/LoRA_esm2_t33_650M_UR50D-finetunedv2-TRANSMEM | AliSaadatV | 2024-11-26T20:29:58Z | 8 | 0 | peft | [
"peft",
"safetensors",
"esm",
"arxiv:1910.09700",
"base_model:facebook/esm2_t33_650M_UR50D",
"base_model:adapter:facebook/esm2_t33_650M_UR50D",
"region:us"
] | null | 2024-11-26T19:59:48Z | ---
base_model: facebook/esm2_t33_650M_UR50D
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
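Pending details from the authors, here is a minimal loading sketch; it assumes this repo hosts a PEFT/LoRA adapter for the base model named in the metadata (the task head, e.g. token classification for transmembrane prediction, is not documented):
```python
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

# Assumption: LoRA adapter weights applied on top of the ESM-2 base model.
base = AutoModel.from_pretrained("facebook/esm2_t33_650M_UR50D")
model = PeftModel.from_pretrained(base, "AliSaadatV/LoRA_esm2_t33_650M_UR50D-finetunedv2-TRANSMEM")
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t33_650M_UR50D")
```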
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF | mradermacher | 2024-11-26T20:29:27Z | 92 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:netcat420/MFANN-Llama3.1-Abliterated-SLERP-V5",
"base_model:quantized:netcat420/MFANN-Llama3.1-Abliterated-SLERP-V5",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-26T17:15:27Z | ---
base_model: netcat420/MFANN-Llama3.1-Abliterated-SLERP-V5
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/netcat420/MFANN-Llama3.1-Abliterated-SLERP-V5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
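As with the other imatrix repos, a single quant from the table below can be pulled and run with llama.cpp; a minimal sketch (file name taken from the table, `huggingface-cli`/`llama-cli` assumed installed):
```bash
huggingface-cli download mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF \
  MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q4_K_M.gguf --local-dir .
llama-cli -m MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q4_K_M.gguf -p "Once upon a time"
```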
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MFANN-Llama3.1-Abliterated-SLERP-V5-i1-GGUF/resolve/main/MFANN-Llama3.1-Abliterated-SLERP-V5.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
asif-anwar/byt5-tangail-ipa | asif-anwar | 2024-11-26T20:28:07Z | 5 | 0 | null | [
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2024-11-26T20:20:42Z | ---
license: apache-2.0
---
|
Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q8_0-GGUF | Triangle104 | 2024-11-26T20:28:06Z | 18 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:knifeayumu/Cydonia-v1.3-Magnum-v4-22B",
"base_model:quantized:knifeayumu/Cydonia-v1.3-Magnum-v4-22B",
"license:other",
"region:us",
"conversational"
] | null | 2024-11-26T19:57:17Z | ---
base_model: knifeayumu/Cydonia-v1.3-Magnum-v4-22B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q8_0-GGUF
This model was converted to GGUF format from [`knifeayumu/Cydonia-v1.3-Magnum-v4-22B`](https://huggingface.co/knifeayumu/Cydonia-v1.3-Magnum-v4-22B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/knifeayumu/Cydonia-v1.3-Magnum-v4-22B) for more details on the model.
---
## Model details
The Drummer becomes hornier (again)

Recipe based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B but uses TheDrummer/Cydonia-22B-v1.3 as the base. Yes, MortalWombat. I'm gonna use your parameters as long as I can!

This is a merge of pre-trained language models created using mergekit.

### Merge Method
This model was merged using the SLERP merge method.

### Models Merged
The following models were included in the merge:
- TheDrummer/Cydonia-22B-v1.3
- anthracite-org/magnum-v4-22b

### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: TheDrummer/Cydonia-22B-v1.3
  - model: anthracite-org/magnum-v4-22b
merge_method: slerp
base_model: TheDrummer/Cydonia-22B-v1.3
parameters:
  t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q8_0-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q8_0-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q8_0-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q8_0-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q8_0.gguf -c 2048
```
|
RylanSchaeffer/collapse_gemma-2-27b_hs2_replace_iter3_sftsd0 | RylanSchaeffer | 2024-11-26T20:21:01Z | 9 | 0 | null | [
"safetensors",
"gemma2",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2-27b",
"base_model:finetune:google/gemma-2-27b",
"license:gemma",
"region:us"
] | null | 2024-11-26T20:10:28Z | ---
license: gemma
base_model: google/gemma-2-27b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: collapse_gemma-2-27b_hs2_replace_iter3_sftsd0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# collapse_gemma-2-27b_hs2_replace_iter3_sftsd0
This model is a fine-tuned version of [google/gemma-2-27b](https://huggingface.co/google/gemma-2-27b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3653
- Num Input Tokens Seen: 3955416
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 4
- eval_batch_size: 16
- seed: 0
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log | 0 | 0 | 1.1282 | 0 |
| 3.8489 | 0.0583 | 5 | 1.0535 | 228936 |
| 3.3414 | 0.1165 | 10 | 1.1298 | 463812 |
| 2.8437 | 0.1748 | 15 | 1.1488 | 702592 |
| 1.9341 | 0.2331 | 20 | 1.2179 | 938224 |
| 1.1621 | 0.2913 | 25 | 1.2570 | 1165920 |
| 0.6806 | 0.3496 | 30 | 1.2791 | 1403276 |
| 0.6728 | 0.4079 | 35 | 1.2535 | 1650592 |
| 0.5266 | 0.4661 | 40 | 1.2409 | 1880524 |
| 0.5377 | 0.5244 | 45 | 1.2414 | 2104356 |
| 0.4042 | 0.5827 | 50 | 1.2466 | 2335700 |
| 0.7168 | 0.6409 | 55 | 1.2873 | 2564852 |
| 0.3333 | 0.6992 | 60 | 1.3003 | 2791324 |
| 0.5753 | 0.7575 | 65 | 1.3164 | 3032688 |
| 0.3997 | 0.8157 | 70 | 1.3235 | 3267132 |
| 0.3566 | 0.8740 | 75 | 1.3464 | 3502604 |
| 0.4565 | 0.9323 | 80 | 1.3853 | 3727432 |
| 0.1841 | 0.9905 | 85 | 1.3653 | 3955416 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|