modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-02 18:27:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 549 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-02 18:24:50) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
qminh369/Cross-Encoder-LLamaIndex-Demo
|
qminh369
| 2024-04-18T12:45:53Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-18T12:45:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HachiML/Swallow-MS-7b-v0.1-ChatSkill-LAB-Evo-v0.3
|
HachiML
| 2024-04-18T12:45:20Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-18T12:40:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
michaelw37/sc31
|
michaelw37
| 2024-04-18T12:40:18Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-16T14:22:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tb2pi-persistent/Llama-2-7b-chat-hf-tb2pi-peft-v7
|
tb2pi-persistent
| 2024-04-18T12:40:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T12:39:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlignmentResearch/robust_llm_pythia-1b_ian-022_IMDB_n-its-3
|
AlignmentResearch
| 2024-04-18T12:32:42Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:finetune:EleutherAI/pythia-1b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-18T12:31:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-1b
model-index:
- name: robust_llm_pythia-1b_ian-022_IMDB_n-its-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-1b_ian-022_IMDB_n-its-3
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
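These settings map onto 🤗 `TrainingArguments` roughly as in the sketch below. This is an illustrative reconstruction, not the exact training script used for this run; the output directory is an assumption.

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters to TrainingArguments fields.
training_args = TrainingArguments(
    output_dir="robust_llm_pythia-1b_ian-022_IMDB_n-its-3",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```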
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
AlignmentResearch/robust_llm_pythia-14m_ian-022_PasswordMatch_n-its-10
|
AlignmentResearch
| 2024-04-18T12:26:38Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-18T12:26:26Z |
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_pythia-14m_ian-022_PasswordMatch_n-its-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_ian-022_PasswordMatch_n-its-10
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
rwr20/rwr_taxiv3
|
rwr20
| 2024-04-18T12:22:36Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-18T12:21:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: rwr_taxiv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your environment stack

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
# (it downloads and unpickles the saved Q-learning dictionary from the Hub);
# it is not part of a pip-installable library.
model = load_from_hub(repo_id="rwr20/rwr_taxiv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
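Continuing from the snippet above, a greedy rollout with the loaded Q-table could look like the sketch below. It assumes the pickled dictionary exposes the table under a `qtable` key (as in the Deep RL course notebooks) and uses the classic `gym` step API; adapt the `reset`/`step` handling if you are on `gymnasium`.

```python
import numpy as np

state = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for this state
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode reward: {total_reward}")
```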
|
nilz1999/Llama-2-7b-multi-label-ft
|
nilz1999
| 2024-04-18T12:13:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T12:13:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ziaddBou/pneumodoc-model
|
ziaddBou
| 2024-04-18T12:09:00Z | 3 | 0 |
keras
|
[
"keras",
"deep-learning",
"medical-imaging",
"computer-vision",
"pneumonia-detection",
"region:us"
] | null | 2024-04-17T20:17:33Z |
---
title: Pneumonia Detection from X-ray Images
emoji: 🏥
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: "3.0"
python_version: "3.10"
suggested_hardware: "cpu-upgrade"
app_file: "./app.py"
fullWidth: true
header: mini
short_description: "Model to detect pneumonia from chest X-ray images."
tags:
- deep-learning
- medical-imaging
- computer-vision
- pneumonia-detection
thumbnail: "URL_to_thumbnail_image"
---
## Model Description
This model employs a MobileNetV3 architecture fine-tuned for the detection of pneumonia from chest X-ray images. It is designed to assist radiologists by providing a preliminary automated diagnosis.
## Training Data
The model was trained on the [Kaggle Pneumonia dataset](https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia), which contains thousands of labeled chest X-ray images from children.
## Model Architecture
The model uses MobileNetV3 as the base for feature extraction, with additional custom layers to tailor it for pneumonia detection.
## Training Procedure
The model was trained with an Adam optimizer and early stopping based on validation loss to prevent overfitting. Data augmentation techniques such as rotations and flips were used to enhance generalization.
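For illustration only, a minimal Keras sketch of that setup might look like the following; the directory layout, image size, MobileNetV3Small variant, augmentation ranges, dropout rate, and callback settings are assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf

# Assumed input pipeline; the authors' exact directory layout and image size are not documented.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=(224, 224), label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/val", image_size=(224, 224), label_mode="binary")

# Augmentation mirroring the description: flips and small rotations.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

# MobileNetV3 base for feature extraction with a small custom head, as described above.
base = tf.keras.applications.MobileNetV3Small(include_top=False, pooling="avg", input_shape=(224, 224, 3))
model = tf.keras.Sequential([
    augment,
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Adam optimizer with early stopping on validation loss, per the training procedure.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)
model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=[early_stop])
```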
## Performance
The model achieved a high accuracy on the validation set, with the following metrics:
- Accuracy: XX%
- Precision: XX%
- Recall: XX%
- F1 Score: XX%
## Usage
Here is an example of how to use this model:
```python
import gradio as gr
import tensorflow as tf

model = tf.keras.models.load_model('model.h5')

def preprocess_image(image):
    # Assumed preprocessing: resize to the model's expected input size and add a batch
    # dimension; adjust the size and normalization to match the training pipeline.
    image = tf.image.resize(image, (224, 224))
    return tf.expand_dims(image, axis=0)

def predict(image):
    processed_image = preprocess_image(image)
    return model.predict(processed_image)

iface = gr.Interface(fn=predict, inputs="image", outputs="label")
iface.launch()
```
|
Eempostor/Harmonify
|
Eempostor
| 2024-04-18T12:08:35Z | 0 | 5 | null |
[
"region:us"
] | null | 2024-01-10T09:04:50Z |
# **Harmonify (RVC No UI Colab)**
https://colab.research.google.com/drive/1X8YR4Ruv7zzY8YAMPTfC7hkxqT_d4Q5d
## **Credits**
- [Eempostor](https://discordapp.com/users/818050831034613771) - Made everything work together, Notebook creator
- [Applio](https://github.com/IAHispano/Applio-RVC-Fork) by [IAHispano](https://github.com/IAHispano) - The repo this colab is based on
- [CNChTu](https://github.com/CNChTu) - [FCPE](https://github.com/CNChTu/FCPE) F0 method
- [So Vits SVC](https://github.com/svc-develop-team/so-vits-svc) - [FCPE](https://github.com/CNChTu/FCPE) F0 method script
- [ChatGPT](https://chat.openai.com/) - Helper
- [Phind](https://www.phind.com/) - Helper

If you have any suggestions or problems with this colab, DM [me](https://discordapp.com/users/818050831034613771) on Discord.
## **Changelogs**
9/3/2024 | Huge changes
- Pitch extraction `fcpe` now uses the `torchfcpe` library. You can still use the previous version with `fcpe_legacy`
- `MIN_PITCH` and `MAX_PITCH` now accept pitch notations
- Fixed error when running inference without a GPU (a GPU is still recommended as it's way faster and more stable)
- Fixed error when running `hybrid` with `pm`, `dio`, and `harvest`
- Overhauled the directory structure and some code
14/1/2024 | Fixes
- Fixed `Cannot retrieve the public link of the file` issue while downloading models from google drive
10/1/2024 | Fixes
- Fixed `Cannot retrieve the public link of the file` issue on installation
12/12/2023 | Small adjustments
- Moved `DOWNLOAD` and `SAVE_TO_DRIVE` option on inference cell into a new cell
|
Litzy619/V0417MADP3
|
Litzy619
| 2024-04-18T11:59:43Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-17T22:26:06Z |
---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0417MADP3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0417MADP3
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.5507 | 0.09 | 10 | 3.0786 |
| 6.3727 | 0.18 | 20 | 2.6464 |
| 3.4656 | 0.27 | 30 | 1.9120 |
| 1.5044 | 0.36 | 40 | 1.1144 |
| 0.581 | 0.45 | 50 | 0.7389 |
| 0.3434 | 0.54 | 60 | 0.5960 |
| 0.3386 | 0.63 | 70 | 0.5215 |
| 0.2957 | 0.73 | 80 | 0.5323 |
| 0.258 | 0.82 | 90 | 0.4773 |
| 0.263 | 0.91 | 100 | 0.4986 |
| 0.2584 | 1.0 | 110 | 0.4831 |
| 0.2808 | 1.09 | 120 | 0.5051 |
| 0.2978 | 1.18 | 130 | 0.4790 |
| 0.2479 | 1.27 | 140 | 0.4456 |
| 0.4023 | 1.36 | 150 | 0.4223 |
| 0.21 | 1.45 | 160 | 0.2159 |
| 0.1788 | 1.54 | 170 | 0.2052 |
| 0.1786 | 1.63 | 180 | 0.2024 |
| 0.1748 | 1.72 | 190 | 0.2013 |
| 0.1718 | 1.81 | 200 | 0.2138 |
| 0.176 | 1.9 | 210 | 0.2197 |
| 0.173 | 1.99 | 220 | 0.2321 |
| 0.1877 | 2.08 | 230 | 0.2317 |
| 0.1732 | 2.18 | 240 | 0.2126 |
| 0.1661 | 2.27 | 250 | 0.1958 |
| 0.1668 | 2.36 | 260 | 0.1955 |
| 0.1642 | 2.45 | 270 | 0.1957 |
| 0.1612 | 2.54 | 280 | 0.1937 |
| 0.1681 | 2.63 | 290 | 0.1910 |
| 0.1622 | 2.72 | 300 | 0.1901 |
| 0.1592 | 2.81 | 310 | 0.1898 |
| 0.1657 | 2.9 | 320 | 0.1904 |
| 0.1696 | 2.99 | 330 | 0.1901 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
AGI-CEO/Taxi-v3_Q-Learning
|
AGI-CEO
| 2024-04-18T11:59:36Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-18T11:59:34Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3_Q-Learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your environment stack

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
# (it downloads and unpickles the saved Q-learning dictionary from the Hub);
# it is not part of a pip-installable library.
model = load_from_hub(repo_id="AGI-CEO/Taxi-v3_Q-Learning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Litzy619/V0417MAD1
|
Litzy619
| 2024-04-18T11:59:04Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-17T09:08:06Z |
---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0417MAD1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0417MAD1
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4934 | 0.09 | 10 | 1.9339 |
| 1.5303 | 0.18 | 20 | 0.6622 |
| 0.3481 | 0.27 | 30 | 0.1084 |
| 0.1158 | 0.36 | 40 | 0.0888 |
| 0.092 | 0.45 | 50 | 0.0768 |
| 0.0876 | 0.54 | 60 | 0.0724 |
| 0.0811 | 0.63 | 70 | 0.0727 |
| 0.0778 | 0.73 | 80 | 0.0699 |
| 0.0798 | 0.82 | 90 | 0.0656 |
| 0.0783 | 0.91 | 100 | 0.0647 |
| 0.0754 | 1.0 | 110 | 0.0638 |
| 0.0668 | 1.09 | 120 | 0.0635 |
| 0.0663 | 1.18 | 130 | 0.0629 |
| 0.064 | 1.27 | 140 | 0.0635 |
| 0.0592 | 1.36 | 150 | 0.0626 |
| 0.0719 | 1.45 | 160 | 0.0626 |
| 0.064 | 1.54 | 170 | 0.0602 |
| 0.0669 | 1.63 | 180 | 0.0613 |
| 0.0617 | 1.72 | 190 | 0.0621 |
| 0.0669 | 1.81 | 200 | 0.0594 |
| 0.0572 | 1.9 | 210 | 0.0596 |
| 0.0588 | 1.99 | 220 | 0.0607 |
| 0.051 | 2.08 | 230 | 0.0612 |
| 0.0559 | 2.18 | 240 | 0.0602 |
| 0.0529 | 2.27 | 250 | 0.0597 |
| 0.054 | 2.36 | 260 | 0.0601 |
| 0.0535 | 2.45 | 270 | 0.0604 |
| 0.0511 | 2.54 | 280 | 0.0601 |
| 0.0486 | 2.63 | 290 | 0.0614 |
| 0.053 | 2.72 | 300 | 0.0611 |
| 0.0573 | 2.81 | 310 | 0.0614 |
| 0.0504 | 2.9 | 320 | 0.0614 |
| 0.0541 | 2.99 | 330 | 0.0615 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Qwen/Qwen1.5-MoE-A2.7B
|
Qwen
| 2024-04-18T11:58:22Z | 9,822 | 196 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"pretrained",
"moe",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-29T04:52:16Z |
---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
- moe
---
# Qwen1.5-MoE-A2.7B
## Introduction
Qwen1.5-MoE is a transformer-based MoE decoder-only language model pretrained on a large amount of data.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen-moe/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
Qwen1.5-MoE employs a Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters at runtime; while achieving performance comparable to `Qwen1.5-7B`, it requires only 25% of the training resources. We also observed that the inference speed is 1.74 times that of `Qwen1.5-7B`.
## Requirements
The code for Qwen1.5-MoE is included in the latest Hugging Face `transformers`, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`; otherwise you might encounter the following error:
```
KeyError: 'qwen2_moe'.
```
## Usage
We do not advise using base language models for text generation directly. Instead, you can apply post-training, e.g., SFT, RLHF, or continued pretraining, to this model.
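For a quick sanity check of the base model, a minimal 🤗 `transformers` sketch (with `transformers` built from source as noted above, and `accelerate` installed for `device_map="auto"`) might look like this; the prompt and generation settings are purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B", device_map="auto")

# This is a base (pretrained) model, so treat it as text completion rather than chat.
inputs = tokenizer("Mixture-of-experts models reduce inference cost by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```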
|
Resi/layfi-docvqa-v1
|
Resi
| 2024-04-18T11:57:36Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"layoutlmv3",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-04-18T11:57:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
paulelliotco/swin-tiny-patch4-window7-224_brain_tumour
|
paulelliotco
| 2024-04-18T11:56:11Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"swin",
"image-classification",
"health",
"braintumour",
"dataset:sartajbhuvaji/Brain-Tumor-Classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-04-18T08:29:26Z |
---
library_name: transformers
tags:
- health
- braintumour
datasets:
- sartajbhuvaji/Brain-Tumor-Classification
metrics:
- accuracy
---
|
Litzy619/V0417MADP1
|
Litzy619
| 2024-04-18T11:54:15Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-17T22:23:30Z |
---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0417MADP1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0417MADP1
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.4601 | 0.09 | 10 | 3.0323 |
| 6.8643 | 0.18 | 20 | 2.8702 |
| 4.5679 | 0.27 | 30 | 2.3361 |
| 2.1248 | 0.36 | 40 | 1.5176 |
| 0.9282 | 0.45 | 50 | 0.9891 |
| 0.4911 | 0.54 | 60 | 0.7251 |
| 0.3523 | 0.63 | 70 | 0.5704 |
| 0.2758 | 0.73 | 80 | 0.4956 |
| 0.2531 | 0.82 | 90 | 0.4682 |
| 0.2596 | 0.91 | 100 | 0.4391 |
| 0.2475 | 1.0 | 110 | 0.4452 |
| 0.2484 | 1.09 | 120 | 0.4215 |
| 0.2508 | 1.18 | 130 | 0.4049 |
| 0.2237 | 1.27 | 140 | 0.3938 |
| 0.2173 | 1.36 | 150 | 0.3682 |
| 0.2077 | 1.45 | 160 | 0.3774 |
| 0.2233 | 1.54 | 170 | 0.3721 |
| 0.2241 | 1.63 | 180 | 0.3554 |
| 0.2178 | 1.72 | 190 | 0.3489 |
| 0.2096 | 1.81 | 200 | 0.3424 |
| 0.2137 | 1.9 | 210 | 0.3384 |
| 0.2084 | 1.99 | 220 | 0.3420 |
| 0.2157 | 2.08 | 230 | 0.3390 |
| 0.2052 | 2.18 | 240 | 0.3359 |
| 0.2017 | 2.27 | 250 | 0.3415 |
| 0.2115 | 2.36 | 260 | 0.3350 |
| 0.195 | 2.45 | 270 | 0.3316 |
| 0.2042 | 2.54 | 280 | 0.3244 |
| 0.2154 | 2.63 | 290 | 0.3287 |
| 0.1995 | 2.72 | 300 | 0.3258 |
| 0.1895 | 2.81 | 310 | 0.3022 |
| 0.207 | 2.9 | 320 | 0.3089 |
| 0.2038 | 2.99 | 330 | 0.3114 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
xiaoliy2/gemma-7b-it-ft-model-1
|
xiaoliy2
| 2024-04-18T11:53:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T11:53:00Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** xiaoliy2
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sudhanshusinghaiml/facebook-bart-base-fintuned
|
sudhanshusinghaiml
| 2024-04-18T11:45:02Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-14T17:18:12Z |
---
license: apache-2.0
language:
- en
---
|
nilz1999/Llama-2-7b-single-label-ft
|
nilz1999
| 2024-04-18T11:43:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T11:43:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Litzy619/V0417MAD5
|
Litzy619
| 2024-04-18T11:42:18Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-17T22:15:08Z |
---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0417MAD5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0417MAD5
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
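
For reference, the listed settings map roughly onto the Hugging Face `TrainingArguments` API as in the sketch below. This is a hedged reconstruction: the actual training script is not published with this card, and names such as `output_dir` are placeholders.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="V0417MAD5",              # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,      # 8 * 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=60,
    num_train_epochs=3,
    fp16=True,                           # "Native AMP" mixed precision
    seed=42,
)
```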
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4934 | 0.09 | 10 | 1.9339 |
| 1.5303 | 0.18 | 20 | 0.6622 |
| 0.3481 | 0.27 | 30 | 0.1084 |
| 0.1158 | 0.36 | 40 | 0.0888 |
| 0.092 | 0.45 | 50 | 0.0768 |
| 0.0876 | 0.54 | 60 | 0.0724 |
| 0.0811 | 0.63 | 70 | 0.0727 |
| 0.0778 | 0.73 | 80 | 0.0699 |
| 0.0798 | 0.82 | 90 | 0.0656 |
| 0.0783 | 0.91 | 100 | 0.0647 |
| 0.0754 | 1.0 | 110 | 0.0638 |
| 0.0668 | 1.09 | 120 | 0.0635 |
| 0.0663 | 1.18 | 130 | 0.0629 |
| 0.064 | 1.27 | 140 | 0.0635 |
| 0.0592 | 1.36 | 150 | 0.0626 |
| 0.0719 | 1.45 | 160 | 0.0626 |
| 0.064 | 1.54 | 170 | 0.0602 |
| 0.0669 | 1.63 | 180 | 0.0613 |
| 0.0617 | 1.72 | 190 | 0.0621 |
| 0.0669 | 1.81 | 200 | 0.0594 |
| 0.0572 | 1.9 | 210 | 0.0596 |
| 0.0588 | 1.99 | 220 | 0.0607 |
| 0.051 | 2.08 | 230 | 0.0612 |
| 0.0559 | 2.18 | 240 | 0.0602 |
| 0.0529 | 2.27 | 250 | 0.0597 |
| 0.054 | 2.36 | 260 | 0.0601 |
| 0.0535 | 2.45 | 270 | 0.0604 |
| 0.0511 | 2.54 | 280 | 0.0601 |
| 0.0486 | 2.63 | 290 | 0.0614 |
| 0.053 | 2.72 | 300 | 0.0611 |
| 0.0573 | 2.81 | 310 | 0.0614 |
| 0.0504 | 2.9 | 320 | 0.0614 |
| 0.0541 | 2.99 | 330 | 0.0615 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
SBB/eynollah-binarization
|
SBB
| 2024-04-18T11:34:25Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2023-07-18T17:08:05Z |
---
license: apache-2.0
---
# Model Card for eynollah-binarization
<!-- Provide a quick summary of what the model is/does. [Optional] -->
This model is part of a suite of 13 models. The suite introduces an end-to-end pipeline to extract layout, text lines and reading order for historic documents, where the output can be used as an input for OCR engines.
Questions and comments about the models can be directed to Vahid Rezanezhad at [email protected].
# Table of Contents
- [Model Card for eynollah-binarization](#model-card-for-eynollah-binarization)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use](#downstream-use)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors and Metrics](#testing-data-factors-and-metrics)
- [Testing Data](#testing-data)
- [Metrics](#metrics)
- [Model Examination](#model-examination)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Software](#software)
- [Citation](#citation)
- [More Information](#more-information)
- [Model Card Authors](#model-card-authors)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
This suite of 13 models presents a document layout analysis (DLA) system for historical documents implemented by pixel-wise segmentation using convolutional neural networks. In addition, heuristic methods are applied to detect marginals and to determine the reading order of text regions.
The detection and classification of multiple classes of layout elements such as headings, images, and tables is required as part of DLA in order to extract and process them in subsequent steps. Altogether, the combination of image detection, classification and segmentation across the wide variety of layouts found in over 400 years of printed cultural heritage makes this a very challenging task. Deep learning models are complemented with heuristics for the detection of text lines, marginals, and reading order. Furthermore, an optional image enhancement step was added for documents that have insufficient pixel density and/or require scaling, as well as a column classifier for the analysis of multi-column documents. With these additions, DLA performance was improved and a high accuracy in the prediction of the reading order is achieved.
Two Arabic/Persian terms form the name of the model suite: عين الله, which can be transcribed as "ain'allah" or "eynollah"; it translates into English as "God's Eye" -- it sees (nearly) everything on the document image.
- **Developed by:** [Vahid Rezanezhad]([email protected])
- **Shared by:** [Staatsbibliothek zu Berlin / Berlin State Library](https://huggingface.co/SBB)
- **Model type:** Neural Network
- **Language(s) (NLP):** Irrelevant; works on all languages
- **License:** apache-2.0
- **Resources for more information:**
- The GitHub Repo can be found [here](https://github.com/qurator-spk/eynollah)
- Associated Paper: [Document Layout Analysis with Deep Learning and Heuristics](https://doi.org/10.1145/3604951.3605513)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The intended use of the suite is performing document layout analysis (DLA) on image data. The system returns the results in [PAGE-XML format](https://github.com/PRImA-Research-Lab/PAGE-XML).
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
The system performs document layout analysis in a series of steps. First, the image is cropped and the number of columns determined. Then the pixels-per-inch (ppi) rate is detected, and when ppi is below 300, the image is re-scaled and enhanced. Now the main region types (text regions, images, separators, background) are detected as the early layout. Marginals are detected with a heuristic method, then -- optionally -- headers (or headings or floatings) and drop capitals. Next, textline segmentation is performed for text regions, and for each text region the slope of deskewing is calculated. Heuristics are used to determine bounding boxes (or contours in the case of curved lines) of textlines in each region after deskewing. After that, the reading order of text regions is detected based on separators, headers (or headings or floatings) and the coordinates of columns. Finally, all results are written to a PAGE-XML file.
**Within the suite, the model *eynollah-binarization/2021-04-25/saved_model.pb* is used for the task of binarization**.
This model is designed to tackle the intricate task of document image binarization, which involves segmentation of the image into white and black pixels. This process significantly contributes to the overall performance of the layout models, particularly in scenarios where the documents are degraded or exhibit subpar quality. The robust binarization capability of the model enables improved accuracy and reliability in subsequent layout analysis, thereby facilitating enhanced document understanding and interpretation.
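
A minimal inference sketch is shown below. It is not the official eynollah code: the patch size, preprocessing and class mapping are assumptions, and the path follows the file layout named in this card (assuming a local clone of the repository).

```python
import numpy as np
import tensorflow as tf

# Hedged sketch: load the SavedModel and binarize a single page patch.
# Eynollah itself applies the model patch-wise over the whole page and
# stitches the predictions back together.
model = tf.keras.models.load_model("eynollah-binarization/2021-04-25", compile=False)

patch = np.random.rand(1, 448, 448, 3).astype("float32")  # placeholder patch in [0, 1]; input size is an assumption
probabilities = model.predict(patch)                       # per-pixel class probabilities
binary_mask = np.argmax(probabilities, axis=-1)            # 0/1 label per pixel (class order assumed)
```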
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
This suite of 13 models is intended to be used as a system. Comparable systems are [dhSegment](https://doi.org/10.1109/ICFHR-2018.2018.00011) and [P2PaLA](https://github.com/lquirosd/P2PaLA). However -- as always with a system -- every component can potentially be used on its own. Each model might therefore be used or trained for other downstream purposes.
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
This model suite does **NOT** perform any Optical Character Recognition (OCR), it is an image-to-PAGE-XML system only.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The pre-processing of digitised historical and contemporary texts contributes to knowledge creation, both by developing models that facilitate the necessary pre-processing steps of document layout analysis and, ultimately, by enabling better discoverability of information in the processed works. Since the content of the document images is not touched, no ethical challenges have been identified. The model was not developed for profit; though a commercial product based on it may be developed in the future, it will always remain openly accessible without any commercial interest. As a technical limitation, it has to be noted that there is considerable performance to be gained for historical texts by adding more historical ground truth data.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
The application of machine learning models to convert a document image into PAGE-XML is a process which can still be improved. The suite of 13 models performs pixel-wise segmentation in patches; it therefore lacks a global understanding of documents, which leaves the document layout analysis system unable to detect some document subcategories such as page numbers. The end-to-end system passes the output of each stage to the next step, so a poor prediction in early steps may degrade the final document information extraction. Patch-wise segmentation can also cause problems when segmenting pixels between text blocks, large-scale drop capitals, headers and tables, since in each patch the model sees only a part of the document element rather than all of it. There is therefore still a lot to gain by improving the existing model suite.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
For model training, Ground Truth in PAGE-XML format was sourced mainly from three datasets. The [IMPACT dataset of historical document images](https://doi.org/10.1145/2501115.2501130) contains a representative sample of historical books and newspapers from Europe’s major libraries. Due to restrictions, only the open data from the National Library of the Netherlands (KB) and the Poznan Supercomputing and Networking Center (PSNC) were used. The [Europeana Newspapers Project (ENP) image and ground truth dataset of historical newspapers](https://doi.org/10.1109/ICDAR.2015.7333898) contains images representative of the newspaper digitisation projects of 12 national and major European libraries. The [OCR-D dataset](https://doi.org/10.1145/3322905.3322916) of German prints between 1500 and 1900 is based on a selection from the holdings of the “German Text Archive” (DTA), the Digitized Collections of the Berlin State Library and the Wolfenbüttel Digital Library.
## Training Procedure
All models, except for the column classifier, are designed for pixelwise segmentation. The training of these models follows a patchwise approach, wherein the documents are divided into smaller patches and fed into the models during training. To train these segmentation models, annotated labels are required. Each sub-element in the document is assigned a unique value for identification.
If you want to examine the training process, take a look at the repository containing the source code for training an encoder model for document image segmentation [on GitHub](https://github.com/qurator-spk/sbb_pixelwise_segmentation).
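
The following is an illustrative sketch of the patch-wise setup described above, not code taken from that repository; the patch size and stride are assumptions.

```python
import numpy as np

def extract_patches(image, label_map, patch_size=448, stride=448):
    """Cut a page image and its pixel-wise label map into fixed-size training patches."""
    pairs = []
    height, width = image.shape[:2]
    for y in range(0, height - patch_size + 1, stride):
        for x in range(0, width - patch_size + 1, stride):
            pairs.append((image[y:y + patch_size, x:x + patch_size],
                          label_map[y:y + patch_size, x:x + patch_size]))
    return pairs

page = np.zeros((2000, 1400, 3), dtype="float32")  # placeholder scan
labels = np.zeros((2000, 1400), dtype="int32")     # one class id per pixel, as described above
training_pairs = extract_patches(page, labels)
```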
### Preprocessing
In order to use this suite of models for historical documents, no preprocessing is needed for the input images.
### Speeds, Sizes, Times
The duration of training per epoch varies, typically lasting between 2 and 5 hours, depending on the specific use case, the datasets used, and the extent of applied data augmentation.
Our ResNet-50-U-Net model has 38.15M parameters where only parameters of the decoder part are fully trained (14.71M parameters).
In the case of the column classifier, we used a ResNet-50 with two dense connected layers at the top. The column classifier model has 25.6M parameters, where only parameters of dense layers are fully trained (2.16M parameters).
### Training hyperparameters
Within the context of a constant model architecture, our hyperparameters encompass diverse augmentations, each characterized by its unique set of parameters. In addition to these, the model training process involves other hyperparameters, including the choice of the loss function, the number of batches utilized, the patch size of input images, and the number of epochs.
### Training results
Training results are reported in [this paper](https://doi.org/10.1145/3604951.3605513).
# Evaluation
Given that the prevailing segmentation metric does not capture optimal outcomes for document segmentation, particularly with respect to isolating regions as effectively as desired, we evaluated our model using smaller batches of the three above-named datasets used for training. In pursuit of improved results, we employed an ensemble learning approach by aggregating the best epoch weights.
## Testing Data, Factors and Metrics
### Testing Data
Three new datasets were used for evaluation to ensure an unbiased comparison. [The PRImA Layout Analysis Dataset](https://www.primaresearch.org/datasets/Layout_Analysis) contains 478 pages of realistic documents, reflecting various challenges in layout analysis.
[The German-Brazilian Newspapers (GBN) Dataset](https://web.inf.ufpr.br/vri/databases/gbn/) comprises 152 pages from historical newspapers. We used only 61 pages to keep the complexity of the documents similar. Finally, the (unreleased) Vlaamse Erfgoedbibliotheken (VEB) dataset comprises ground truth for 75 pages from historical Belgian newspapers, split across three sets.
### Metrics
In addition to utilizing conventional performance metrics such as mean Intersection over Union (mIoU), precision, recall, and F1-score, we have incorporated the [Prima layout evaluation](https://www.primaresearch.org/tools/PerformanceEvaluation) metrics, namely Merge, Split, Miss, and the overall success rate.
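
As an illustration of the conventional metrics, a minimal mean-IoU computation for a pixel-wise prediction might look as follows; the Prima metrics require the official evaluation tool and are not reproduced here.

```python
import numpy as np

def mean_iou(prediction, target, num_classes):
    """Average intersection-over-union over the classes present in the ground truth."""
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(prediction == c, target == c).sum()
        union = np.logical_or(prediction == c, target == c).sum()
        if union > 0:
            ious.append(intersection / union)
    return float(np.mean(ious))

prediction = np.random.randint(0, 3, size=(448, 448))  # dummy label maps for illustration
target = np.random.randint(0, 3, size=(448, 448))
print(mean_iou(prediction, target, num_classes=3))
```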
# Model Examination
Examination results are reported in [this paper](https://doi.org/10.1145/3604951.3605513).
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** GeForce RTX 2070
- **Hours used:** 2 to 5 hours per epoch
- **Cloud Provider:** No cloud.
- **Compute Region:** Germany.
- **Carbon Emitted:** More information needed.
# Technical Specifications
## Model Architecture and Objective
See [publication](https://doi.org/10.1145/3604951.3605513).
### Software
See the code published on [GitHub](https://github.com/qurator-spk/eynollah).
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{10.1145/3604951.3605513,
author = {Rezanezhad, Vahid and Baierer, Konstantin and Gerber, Mike and Labusch, Kai and Neudecker, Clemens},
title = {Document Layout Analysis with Deep Learning and Heuristics},
year = {2023},
isbn = {9798400708411},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3604951.3605513},
doi = {10.1145/3604951.3605513},
abstract = {The automated yet highly accurate layout analysis (segmentation) of historical document images remains a key challenge for the improvement of Optical Character Recognition (OCR) results. But historical documents exhibit a wide array of features that disturb layout analysis, such as multiple columns, drop capitals and illustrations, skewed or curved text lines, noise, annotations, etc. We present a document layout analysis (DLA) system for historical documents implemented by pixel-wise segmentation using convolutional neural networks. In addition, heuristic methods are applied to detect marginals and to determine the reading order of text regions. Our system can detect more layout classes (e.g. initials, marginals) and achieves higher accuracy than competitive approaches. We describe the algorithm, the different models and how they were trained and discuss our results in comparison to the state-of-the-art on the basis of three historical document datasets.},
booktitle = {Proceedings of the 7th International Workshop on Historical Document Imaging and Processing},
pages = {73–78},
numpages = {6},
keywords = {Document layout analysis, Reading order detection, Segmentation},
location = {San Jose, CA, USA},
series = {HIP '23}
}
```
**APA:**
(Rezanezhad et al., 2023)
# More Information
SHA256 and MD5 hashes for the model *eynollah-binarization/2021-04-25/saved_model.pb*:
SHA256: 18dc9879828a42d8f12845f6026d4835acf7ac70f82abda68ad3a5cc17b9e44a
MD5: 0544acbce4a19868a5a9d62c284a648a
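
A downloaded copy of the file can be checked against these hashes with a few lines of Python, for example:

```python
import hashlib

def file_hashes(path):
    """Return (sha256, md5) hex digests of a file, read in 1 MiB chunks."""
    sha256, md5 = hashlib.sha256(), hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
            md5.update(chunk)
    return sha256.hexdigest(), md5.hexdigest()

print(file_hashes("eynollah-binarization/2021-04-25/saved_model.pb"))
```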
# Model Card Authors
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
[Vahid Rezanezhad]([email protected]), [Clemens Neudecker]([email protected]) and [Jörg Lehmann]([email protected])
# Model Card Contact
Questions and comments about the model can be directed to [Vahid Rezanezhad]([email protected]), questions and comments about the model card can be directed to [Jörg Lehmann]([email protected]).
# How to Get Started with the Model
How to get started with this model is explained in the ReadMe file of the GitHub repository [over here](https://github.com/qurator-spk/eynollah).
Model Card as of August 17th, 2023
|
TeamResearch/sentiment-model-saagie
|
TeamResearch
| 2024-04-18T11:25:03Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-07T10:10:57Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment-model-saagie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-saagie
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5475
- Accuracy: 0.7933
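
A minimal usage sketch with the `transformers` pipeline API is shown below; the label names and the underlying dataset are not documented in this card, so treat the predicted labels as-is.

```python
from transformers import pipeline

# Hedged example: standard text-classification inference with this checkpoint.
classifier = pipeline("text-classification", model="TeamResearch/sentiment-model-saagie")
print(classifier("This product works surprisingly well."))
```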
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5168 | 1.0 | 1500 | 0.4738 | 0.7733 |
| 0.3809 | 2.0 | 3000 | 0.5253 | 0.7917 |
| 0.3372 | 3.0 | 4500 | 0.5475 | 0.7933 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.8.1
- Datasets 2.12.0
- Tokenizers 0.12.1
|
anrhi/mobile_v2__fake_image_M_detection
|
anrhi
| 2024-04-18T11:14:19Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2024-04-18T11:13:46Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged optimizer sketch follows the table):
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
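
As a hedged sketch, the optimizer settings in the table correspond to a standard Keras Adam configuration along these lines; everything else about the training setup (model, loss, data) is undocumented here.

```python
import tensorflow as tf

# Reconstruction of the listed optimizer hyperparameters only.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-3,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
)
```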
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
comet24082002/finetuned_bge_ver19
|
comet24082002
| 2024-04-18T11:12:18Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-04-18T08:06:18Z |
---
license: mit
base_model: BAAI/bge-m3
tags:
- generated_from_trainer
model-index:
- name: finetuned_bge_ver19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_bge_ver19
This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on an unknown dataset.
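
No inference recipe is documented in this card. A hedged usage sketch following the common dense-retrieval pattern for BGE-M3-style models (CLS pooling plus L2 normalisation) is given below; the pooling choice is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "comet24082002/finetuned_bge_ver19"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

sentences = ["What is the capital of Vietnam?", "Hanoi is the capital of Vietnam."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

embeddings = outputs.last_hidden_state[:, 0]                    # CLS pooling (assumed)
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)  # unit-length vectors
print(embeddings @ embeddings.T)                                # cosine similarities
```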
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
EinsZwo/nlid_mlm_supertag050-fullset-sanitysaveaftertrain
|
EinsZwo
| 2024-04-18T11:08:51Z | 159 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-18T11:08:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chirbard/dqn-SpaceInvadersNoFrameskip-v4
|
chirbard
| 2024-04-18T11:06:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-18T08:30:43Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 512.50 +/- 269.25
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga chirbard -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga chirbard -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
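
Alternatively, the checkpoint can be loaded directly with Stable-Baselines3. The sketch below is a hedged example; the file name is assumed to follow the usual RL Zoo export layout.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the zipped agent from this repo and load it as an SB3 DQN model.
checkpoint = load_from_hub(
    repo_id="chirbard/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed file name
)
model = DQN.load(checkpoint)
```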
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga chirbard
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
rhaymison/Mistral-portuguese-luana-7b-mental-health
|
rhaymison
| 2024-04-18T11:04:31Z | 13 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"health",
"portuguese",
"conversational",
"pt",
"dataset:rhaymison/mental-health-qa",
"base_model:rhaymison/Mistral-portuguese-luana-7b",
"base_model:finetune:rhaymison/Mistral-portuguese-luana-7b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-04T17:10:00Z |
---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- health
- portuguese
base_model: rhaymison/Mistral-portuguese-luana-7b
datasets:
- rhaymison/mental-health-qa
pipeline_tag: text-generation
model-index:
- name: Mistral-portuguese-luana-7b-mental-health
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 60.53
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-mental-health
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 48.26
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-mental-health
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 38.04
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-mental-health
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 91.3
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-mental-health
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 74.98
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-mental-health
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 60.57
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-mental-health
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 76.86
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-mental-health
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 70.05
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-mental-health
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 64.9
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-mental-health
name: Open Portuguese LLM Leaderboard
---
# Mistral-portuguese-luana-7b-mental-health
<p align="center">
<img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/luana-hel.jpeg" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
Luana Mental Health is a fine-tuned version of Luana-7b, which is based on the Mistral 7B architecture.
The model was tuned to address topics such as depression, problems at work, mental health, problems with studies, drugs, and others.
# How to use
You can use the model in its full form or quantized down to 4 bits. Below we show both approaches.
Remember that verbs are important in your prompt: tell the model how to act or behave so that you can guide it along the path of its response.
Details like these help models (even smaller models like 7B) perform much better.
### FULL MODEL : A100
### HALF MODEL: L4
### 8bit or 4bit : T4 or V100
```python
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model = AutoModelForCausalLM.from_pretrained("rhaymison/Mistral-portuguese-luana-7b-mental-health", device_map= {"": 0})
tokenizer = AutoTokenizer.from_pretrained("rhaymison/Mistral-portuguese-luana-7b-mental-health")
model.eval()
```
You can use the model with a `pipeline`, but in this example we use streaming generation instead.
```python
inputs = tokenizer([f"""<s>Aja como um conselheira de pessoas que estão passandos por dificuldades.
Responda sempre de maneira educada e procure ajudar as pessoas.
[INST] Olá, estou tendo muitos problemas pessoais, briga com meus pais. Tenho tido problemas no trabalho. Preciso de ajuda[/INST]"""], return_tensors="pt")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=200)
#Olá, estou aqui para ajudá-lo. É normal enfrentar problemas pessoais, mas é importante buscar ajuda.
#Vamos conversar sobre suas brigas com seus pais e como podemos resolver isso. Além disso, vamos falar sobre seus problemas no trabalho e como podemos encontrar
#soluções para isso. Você não está sozinho, estou aqui para ajudá-lo. Vamos começar a encontrar maneiras de lidar com essas questões.
#Você gostaria de começar falando sobre sua situação com seus pais?
#Ou talvez queremos começar falando sobre seus problemas no trabalho? O que você acha melhor? Você pode me contar mais sobre isso?
#Estou aqui para ajudá-lo.
```
# LangChain
<div style="background-color:#f0f0f0; padding:10px; border-radius:10px; font-family: 'Arial';">
<p><strong><span style="color:#3366cc;">Human:</span></strong> Ola Luana, estou passando por um momento dificil. Estou precisando de ajuda</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> Olá, estou aqui para te ajudar. Você gostaria de falar sobre o que está te incomodando?</p>
<p><strong><span style="color:#3366cc;">Human:</span></strong> Tenho tido problemas em casa, em especial com minha esposa. Temos brigado muito, estou com medo do meu
casamento terminar.</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> É normal ter problemas no casamento, mas é importante conversar com sua esposa sobre isso.
Você já tentou ter uma conversa calma e honesta com ela?</p>
<p><strong><span style="color:#3366cc;">Human:</span></strong> Já sim, ela me entende, mas sempre terminamos em brigas.
Eu sou uma pessoa muito stressada por causa do trabalho. Acabo descontando isso nela.</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> É importante encontrar maneiras saudáveis de lidar com o estresse.
Você já tentou praticar atividades de relaxamento, como meditação ou ioga?</p>
<p><strong><span style="color:#3366cc;">Human:</span></strong> Ja tentei sim. Eu pratico corrida e jogo vide-game pra acalmar, mas nao tem ajudado.
Será que uma terapida de casal poderia ajudar?</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> Uma terapeuta de casal pode ser uma ótima opção para ajudar a melhorar a comunicação e resolver
problemas no casamento. Você já considerou isso?</p>
<p><strong><span style="color:#3366cc;">Human:</span></strong> É uma boa opção. Voce poderia me indicar um modo de conversa com ela sobre a terapida de casal?
Como apresentar essa ideia pra ela.</p>
<p><strong><span style="color:#ff6600;">Luana:</span></strong> Sim, é importante abordar a ideia da terapia de casal de maneira aberta e honesta.
Você pode começar dizendo que está com medo do seu casamento terminar e que gostaria de tentar algo para melhorar a situação. Também pode ser útil falar sobre os benefícios da terapia de casal, como melhorar a comunicação e encontrar soluções para problemas. Lembre-se de ser paciente e respeitoso, mesmo que ela não esteja disposta a aceitar a ideia inicialmente. A terapia de casal pode ser uma ferramenta poderosa para melhorar o relacionamento, mas é importante que ambos estejam dispostos a dar o primeiro passo.</p>
</div>
# 4bits
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True
)

base_model = "rhaymison/Mistral-portuguese-luana-7b-mental-health"

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map={"": 0}
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
```
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/rhaymison/Mistral-portuguese-luana-7b-mental-health) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|---------|
|Average |**65.05**|
|ENEM Challenge (No Images)| 60.53|
|BLUEX (No Images) | 48.26|
|OAB Exams | 38.04|
|Assin2 RTE | 91.30|
|Assin2 STS | 74.98|
|FaQuAD NLI | 60.57|
|HateBR Binary | 76.86|
|PT Hate Speech Binary | 70.05|
|tweetSentBR | 64.90|
### Comments
Any ideas, help or reports are always welcome.
email: [email protected]
<div style="display:flex; flex-direction:row; justify-content:left">
<a href="https://www.linkedin.com/in/heleno-betini-2b3016175/" target="_blank">
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
</a>
<a href="https://github.com/rhaymisonbetini" target="_blank">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
</a>
</div>
|
jin-cheon/dummy-model
|
jin-cheon
| 2024-04-18T11:02:27Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-04-18T11:01:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Haimee/mistral_envs_claim
|
Haimee
| 2024-04-18T11:01:38Z | 13 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | 2024-04-16T11:53:31Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: mistral_envs_claim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_envs_claim
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
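
Since this repository contains a PEFT adapter trained on top of a GPTQ-quantised base, a hedged loading sketch looks as follows; the prompt format and generation settings are assumptions, and loading the GPTQ base requires the `optimum` and `auto-gptq` packages.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/zephyr-7B-alpha-GPTQ"
adapter_id = "Haimee/mistral_envs_claim"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter
model.eval()

prompt = "Your instruction here."  # placeholder; the expected prompt format is not documented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```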
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.0a0+29c30b1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
adriansanz/test_new
|
adriansanz
| 2024-04-18T11:01:26Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"zero-shot-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2024-04-18T09:17:36Z |
---
pipeline_tag: zero-shot-classification
---
|
anrhi/mobile_v2_new_fake_image_detection
|
anrhi
| 2024-04-18T11:01:25Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2024-04-18T11:01:07Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
|
Litzy619/V0417MAD3
|
Litzy619
| 2024-04-18T10:57:48Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-17T10:17:30Z |
---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0417MAD3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0417MAD3
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3013 | 0.09 | 10 | 1.4269 |
| 0.9114 | 0.18 | 20 | 0.1385 |
| 0.1257 | 0.27 | 30 | 0.0956 |
| 0.1051 | 0.36 | 40 | 0.0844 |
| 0.0877 | 0.45 | 50 | 0.0801 |
| 0.0904 | 0.54 | 60 | 0.0736 |
| 0.082 | 0.63 | 70 | 0.0701 |
| 0.0761 | 0.73 | 80 | 0.0710 |
| 0.0857 | 0.82 | 90 | 0.0672 |
| 0.0789 | 0.91 | 100 | 0.0659 |
| 0.0775 | 1.0 | 110 | 0.0687 |
| 0.0716 | 1.09 | 120 | 0.0669 |
| 0.0698 | 1.18 | 130 | 0.0676 |
| 0.0731 | 1.27 | 140 | 0.0646 |
| 0.0675 | 1.36 | 150 | 0.0665 |
| 0.0773 | 1.45 | 160 | 0.0676 |
| 0.0705 | 1.54 | 170 | 0.0659 |
| 0.0771 | 1.63 | 180 | 0.0627 |
| 0.2895 | 1.72 | 190 | 0.0898 |
| 0.0982 | 1.81 | 200 | 0.0753 |
| 0.0782 | 1.9 | 210 | 0.0711 |
| 0.0721 | 1.99 | 220 | 0.0690 |
| 0.0652 | 2.08 | 230 | 0.0677 |
| 0.0693 | 2.18 | 240 | 0.0654 |
| 0.0661 | 2.27 | 250 | 0.0646 |
| 0.0685 | 2.36 | 260 | 0.0643 |
| 0.0665 | 2.45 | 270 | 0.0641 |
| 0.0629 | 2.54 | 280 | 0.0639 |
| 0.0588 | 2.63 | 290 | 0.0642 |
| 0.0645 | 2.72 | 300 | 0.0639 |
| 0.0675 | 2.81 | 310 | 0.0636 |
| 0.061 | 2.9 | 320 | 0.0635 |
| 0.067 | 2.99 | 330 | 0.0635 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
doxgxxn/gemma_prompt_recovery2
|
doxgxxn
| 2024-04-18T10:50:02Z | 159 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-18T10:47:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
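Since no author-provided instructions are available, a generic loading sketch based purely on the repository metadata (a Gemma-architecture checkpoint tagged for text generation) would look roughly like this; the prompt is only an illustration.
```python
# Generic loading sketch inferred from repo metadata; not author-provided usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "doxgxxn/gemma_prompt_recovery2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Rewrite this text as a pirate:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```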
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
spsither/mms_300_v1.630
|
spsither
| 2024-04-18T10:31:50Z | 66 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-18T10:29:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
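No usage instructions are provided, but given the repository metadata (a wav2vec2 checkpoint tagged for automatic speech recognition), a minimal pipeline sketch would look roughly like this; the audio file path is a placeholder.
```python
# Generic ASR sketch inferred from repo metadata; not author-provided usage.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="spsither/mms_300_v1.630")
print(asr("sample.wav"))  # placeholder path to a 16 kHz mono audio file
```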
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mohits01/phi-2-finetuned-intentOnly
|
mohits01
| 2024-04-18T10:31:46Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-18T07:59:25Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-finetuned-intentOnly
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-intentOnly
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 50
- mixed_precision_training: Native AMP
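For orientation, a rough sketch of how a PEFT run with these hyperparameters is typically wired up on top of `microsoft/phi-2` follows. The LoRA settings (`r`, `lora_alpha`, dropout) are assumptions, since the card does not report them.
```python
# Illustrative sketch; LoRA settings and output_dir are assumptions, not values from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, trust_remote_code=True)

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")  # assumed
model = get_peft_model(model, peft_config)

training_args = TrainingArguments(
    output_dir="phi-2-finetuned-intentOnly",
    learning_rate=2e-4,
    per_device_train_batch_size=6,
    gradient_accumulation_steps=4,   # 6 * 4 = 24 effective batch size
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=50,
    seed=42,
    fp16=True,                       # "Native AMP"
)
```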
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
michaelw37/sc30
|
michaelw37
| 2024-04-18T10:31:26Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-13T15:15:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomaszki/stablelm-38
|
tomaszki
| 2024-04-18T10:26:36Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-18T10:24:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dudu1021/my_doccls_korean_model
|
dudu1021
| 2024-04-18T10:24:04Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:beomi/kcbert-base",
"base_model:finetune:beomi/kcbert-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-15T04:41:35Z |
---
license: apache-2.0
base_model: beomi/kcbert-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_doccls_korean_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_doccls_korean_model
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5416
- Accuracy: 0.8856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5834 | 1.0 | 37500 | 0.5604 | 0.8323 |
| 0.4454 | 2.0 | 75000 | 0.4939 | 0.8739 |
| 0.3675 | 3.0 | 112500 | 0.5165 | 0.8819 |
| 0.3438 | 4.0 | 150000 | 0.5416 | 0.8856 |
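For quick experimentation, the checkpoint can be loaded with the 🤗 `pipeline` API as sketched below; the example sentence is illustrative, and the label names depend on the (undocumented) training data.
```python
# Minimal inference sketch; label semantics are not documented in this card.
from transformers import pipeline

classifier = pipeline("text-classification", model="dudu1021/my_doccls_korean_model")
print(classifier("이 영화 정말 재미있어요!"))  # e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```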
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Ppoyaa/Lumina-3.5
|
Ppoyaa
| 2024-04-18T10:19:11Z | 2,786 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-16T15:54:59Z |
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
model-index:
- name: Lumina-3.5
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 75.66
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ppoyaa/Lumina-3.5
name: Open LLM Leaderboard
---
# Lumina-3.5
Lumina-3.5 is a Mixture of Experts (MoE) model made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing), offering improved reasoning over [Lumina-3](https://huggingface.co/Ppoyaa/Lumina-3). It supports a context window of up to 32k tokens.
# Response Examples
``` Math
Input: 25 - 4 * 2 + 3 = ?
------------------------------------
Output: To solve this equation, we will follow the order of operations (PEMDAS/BODMAS). This means performing any multiplications and divisions first from left to right, and then additions and subtractions from left to right.
The given equation is: 25 - 4 * 2 + 3
First, let's perform the multiplication:
4 * 2 = 8
Now, our equation looks like: 25 - 8 + 3
Next, perform the subtraction:
25 - 8 = 17
Finally, do the addition:
17 + 3 = 20
So, the result of the equation 25 - 4 * 2 + 3 is 20.
```
``` Reasoning
Input: Jane is faster than Joe. Joe is faster than Sam. Is Sam faster than Jane? Explain your reasoning step by step.
------------------------------------
Output: To answer this question, let's analyze the given information step by step:
We are provided with a comparison between three individuals - Jane, Joe, and Sam - in terms of their relative speeds. The statements tell us the following about their ranks:
Jane > Joe (Jane is faster than Joe)
Joe > Sam (Joe is faster than Sam)
Now we need to determine if the opposite relationship holds true: whether Sam is faster than Jane or not. Based on the given comparisons, we know that both Jane and Joe have faster speeds compared to Sam. There isn't any piece of information suggesting otherwise.
Since the original premise established an ordered ranking among these individuals (Jane > Joe > Sam), and the direct opposite of the relationship between Sam and the other two is not present, we cannot conclude that Sam is faster than Jane.
Thus, based on the provided information, our reasoning confirms that Sam is not faster than Jane.
```
## 🏆 Open LLM Leaderboard Evaluation Results
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.41|
|AI2 Reasoning Challenge (25-Shot)|71.59|
|HellaSwag (10-Shot) |88.82|
|MMLU (5-Shot) |64.48|
|TruthfulQA (0-shot) |75.66|
|Winogrande (5-shot) |83.98|
|GSM8k (5-shot) |67.93|
# Quants
Special thanks to [mradermacher](https://huggingface.co/mradermacher) for providing the GGUF quants:
* [mradermacher/Lumina-3.5-GGUF](https://huggingface.co/mradermacher/Lumina-3.5-GGUF)
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/Lumina-3.5"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Ppoyaa__Lumina-3.5)
|
abdullahfurquan/mistral-1713422427
|
abdullahfurquan
| 2024-04-18T10:15:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T07:42:28Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
datasets:
- generator
model-index:
- name: mistral-1713422427
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-1713422427
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 5
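A rough TRL-style sketch of a run with these settings follows. The dataset contents, adapter configuration, and output directory are assumptions; `max_steps=5` mirrors the reported `training_steps`, which suggests a short smoke-test run, and the reported warmup value of 0.03 is interpreted here as a warmup ratio rather than a step count.
```python
# Illustrative SFT sketch; dataset, LoRA config, and output_dir are assumptions.
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from peft import LoraConfig

train_dataset = Dataset.from_dict({"text": ["<s>[INST] Hi [/INST] Hello</s>"]})  # placeholder data

training_args = TrainingArguments(
    output_dir="mistral-1713422427",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 4 * 2 = 8 effective batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.03,               # interpretation of the reported 0.03 "warmup steps"
    max_steps=5,
    seed=42,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.1",
    args=training_args,
    train_dataset=train_dataset,
    dataset_text_field="text",                       # assumption
    peft_config=LoraConfig(task_type="CAUSAL_LM"),   # assumed adapter settings
)
trainer.train()
```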
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8646 | 0.17 | 1 | 1.7810 |
| 1.7688 | 0.33 | 2 | 1.7576 |
| 1.8047 | 0.5 | 3 | 1.7373 |
| 1.6987 | 0.67 | 4 | 1.7224 |
| 1.7796 | 0.83 | 5 | 1.7142 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Shakhovak/flan-t5-large-absa-laptops
|
Shakhovak
| 2024-04-18T10:11:42Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-10T08:05:12Z |
---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_trainer
model-index:
- name: flan-t5-large-absa-laptops
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-absa-laptops
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3509 | 0.9 | 200 | 0.1762 |
| 0.2241 | 1.8 | 400 | 0.1329 |
| 0.1543 | 2.7 | 600 | 0.1224 |
| 0.1107 | 3.6 | 800 | 0.1294 |
| 0.0877 | 4.5 | 1000 | 0.1474 |
| 0.0697 | 5.41 | 1200 | 0.1444 |
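A minimal inference sketch for the resulting checkpoint follows; the exact prompt format the model expects for aspect-based sentiment analysis is not documented here, so the input string is only an illustration.
```python
# Minimal inference sketch; the prompt format is an assumption, not documented in this card.
from transformers import pipeline

absa = pipeline("text2text-generation", model="Shakhovak/flan-t5-large-absa-laptops")
print(absa("Review: The battery life is great but the keyboard feels cheap."))
```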
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
FriendliAI/Mixtral-8x7B-Instruct-v0.1-fp8
|
FriendliAI
| 2024-04-18T10:10:37Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"pretrained",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] |
text-generation
| 2024-04-17T02:41:39Z |
---
license: apache-2.0
base_model: mistralai/Mixtral-8X7B-Instruct-v0.1
inference: false
model_link: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
model_name: mistralai/Mixtral-8x7B-Instruct-v0.1
pipeline_tag: text-generation
quantized_by: FriendliAI
tags:
- pretrained
---
<!-- header start -->
<p align="center">
<img src="https://i.imgur.com/mNM6Cai.png" width="100%" alt="Friendli Logo">
</p>
<!-- header end -->
# Mixtral-8x7B-Instruct-v0.1 - FP8
- Model creator: [Mistral AI](https://huggingface.co/mistralai)
- Original model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
## Description
This repo contains the Mixtral-8x7B-Instruct-v0.1 model quantized to FP8 by FriendliAI, significantly enhancing its inference efficiency while maintaining high accuracy.
Note that FP8 is only supported by NVIDIA Ada, Hopper, and Blackwell GPU architectures.
Check out [FriendliAI documentation](https://docs.friendli.ai/) for more details.
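Since FP8 requires Ada-class or newer hardware, one quick way to check whether the local GPU qualifies is sketched below (compute capability 8.9 corresponds to Ada, 9.0 and above to Hopper/Blackwell); this is a convenience check, not part of the FriendliAI tooling.
```python
# Quick FP8 hardware sanity check (Ada = sm_89, Hopper = sm_90, Blackwell = sm_100+).
import torch

major, minor = torch.cuda.get_device_capability()
print(f"Compute capability: {major}.{minor}")
print("FP8-capable GPU:", (major, minor) >= (8, 9))
```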
## Compatibility
This model is compatible with **[Friendli Container](https://friendli.ai/products/container/)**.
## Prerequisites
- Before you begin, make sure you have signed up for [Friendli Suite](https://suite.friendli.ai/). **You can use Friendli Containers free of charge for four weeks.**
- Prepare a Personal Access Token following [this guide](#preparing-personal-access-token).
- Prepare a Friendli Container Secret following [this guide](#preparing-container-secret).
### Preparing Personal Access Token
PAT (Personal Access Token) is the user credential for logging into our container registry.
1. Sign in to [Friendli Suite](https://suite.friendli.ai/).
2. Go to **[User Settings > Tokens](https://suite.friendli.ai/user-settings/tokens)** and click **'Create new token'**.
3. Save your created token value.
### Pulling Friendli Container Image
1. Log in to the Docker client using the personal access token created as outlined in [this guide](#preparing-personal-access-token).
```sh
export FRIENDLI_PAT="YOUR PAT"
docker login registry.friendli.ai -u $YOUR_EMAIL -p $FRIENDLI_PAT
```
2. Pull image
```sh
docker pull registry.friendli.ai/trial
```
## Running Friendli Container
Once you've pulled the Friendli Container image, you can launch it to create a serving endpoint.
```sh
docker run \
--gpus '"device=0"' \
-p 8000:8000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
-e FRIENDLI_CONTAINER_SECRET="YOUR CONTAINER SECRET" \
registry.friendli.ai/trial \
--web-server-port 8000 \
--hf-model-name FriendliAI/Mixtral-8x7B-Instruct-v0.1-fp8
```
### Optimizing Inference Performance with Policy Search
To serve MoE models efficiently, you first need to run a policy search to find the optimal execution policy:
```sh
export POLICY_DIR=$PWD/policy
mkdir -p $POLICY_DIR
docker run \
--gpus '"device=0"' \
-p 8000:8000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
-v $POLICY_DIR:/policy \
-e FRIENDLI_CONTAINER_SECRET="YOUR CONTAINER SECRET" \
registry.friendli.ai/trial \
--web-server-port 8000 \
--hf-model-name FriendliAI/Mixtral-8x7B-Instruct-v0.1-fp8 \
--algo-policy-dir /policy \
--search-policy true
```
When the optimal policy is found, it is compiled into a policy file and saved at `$POLICY_DIR`.
Now you can create an inference endpoint with this optimal policy as follows:
```sh
docker run \
--gpus '"device=0"' \
-p 8000:8000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
-v $POLICY_DIR:/policy \
-e FRIENDLI_CONTAINER_SECRET="YOUR CONTAINER SECRET" \
registry.friendli.ai/trial \
--web-server-port 8000 \
--hf-model-name FriendliAI/Mixtral-8x7B-Instruct-v0.1-fp8 \
--algo-policy-dir /policy
```
---
# Original model card: Mistral AI's Mixtral-8x7B Instruct v0.1
# Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
## Instruction format
This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
return tok.encode(text, add_special_tokens=False)
[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```
In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
In the Transformers library, one can use [chat templates](https://huggingface.co/docs/transformers/main/en/chat_templating) which make sure the right format is applied.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers loads the model in full precision. To further reduce the memory required to run the model, you can use the optimizations offered in the HF ecosystem:
### In half-precision
Note that `float16` precision only works on GPU devices
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto")
text = "Hello my name is"
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Limitations
The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
Baprick/save
|
Baprick
| 2024-04-18T10:08:45Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T10:03:43Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: save
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# save
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
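Because this repository stores only a PEFT adapter, inference requires loading the base model first and attaching the adapter, roughly as sketched below; the dtype, device placement, and prompt are assumptions.
```python
# Illustrative sketch for loading this adapter on top of its base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, "Baprick/save")

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}], return_tensors="pt"
).to(base_model.device)
print(tokenizer.decode(model.generate(input_ids=inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```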
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
shrenikb/fedglobaltest1
|
shrenikb
| 2024-04-18T10:06:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T00:17:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shrenikb/fed75sparsitytest1
|
shrenikb
| 2024-04-18T10:05:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T00:16:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shrenikb/fed25sparsitytest1
|
shrenikb
| 2024-04-18T10:05:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T00:17:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ClementRomac/llm_gtl_nbr_env_32_Flan_T5large_6-actions
|
ClementRomac
| 2024-04-18T10:03:39Z | 0 | 0 | null |
[
"arxiv:2302.02662",
"license:mit",
"region:us"
] | null | 2024-04-18T08:08:46Z |
---
license: mit
---
Flan-T5 large finetuned with [GLAM](https://sites.google.com/view/grounding-llms-with-online-rl/) on BabyAI-Text GoToLocal task.
Paper: arxiv.org/abs/2302.02662
|
ClementRomac/llm_gtl_nbr_env_32_Flan_T5small_6-actions
|
ClementRomac
| 2024-04-18T10:03:15Z | 0 | 0 | null |
[
"arxiv:2302.02662",
"license:mit",
"region:us"
] | null | 2024-04-18T09:24:40Z |
---
license: mit
---
Flan-T5 small finetuned with [GLAM](https://sites.google.com/view/grounding-llms-with-online-rl/) on BabyAI-Text GoToLocal task.
Paper: arxiv.org/abs/2302.02662
|
1998Shubham007/ModelRecomm
|
1998Shubham007
| 2024-04-18T10:00:49Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-04-18T10:00:43Z |
---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which one, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
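As a hedged illustration (not part of the original card), a simple semantic-search use of these embeddings with the `sentence_transformers.util` helpers could look like this; the corpus and query are placeholders:
```python
# Illustrative semantic-search sketch; corpus and query are assumptions, not card content.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

corpus = ["A man is eating food.", "A monkey is playing drums.", "A cheetah chases its prey."]
query = "Someone is having a meal"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```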
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for every possible sentence pair in the batch.
We then apply a cross-entropy loss against the true pairs, as sketched below.
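A minimal sketch of this in-batch objective (not the original training script; the similarity scale of 20 is an assumption mirroring common defaults):
```python
# In-batch contrastive loss: cosine similarity between every (anchor, candidate)
# pair in the batch, with cross-entropy against the true pair on the diagonal.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # anchor_emb, positive_emb: (batch_size, dim); row i of each forms a true pair
    anchor = F.normalize(anchor_emb, p=2, dim=1)
    positive = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor @ positive.T * scale                          # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)   # true pair sits on the diagonal
    return F.cross_entropy(scores, labels)

# Example with random 384-dimensional embeddings
loss = in_batch_contrastive_loss(torch.randn(8, 384), torch.randn(8, 384))
print(loss.item())
```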
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We use a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is available in this repository: `train_script.py`.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
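For illustration only, probability-weighted mixing of datasets with the 🤗 `datasets` library could be sketched as follows; the datasets and probabilities below are placeholders rather than the values actually used:
```python
# Placeholder example of probability-weighted dataset mixing.
from datasets import load_dataset, interleave_datasets

snli = load_dataset("snli", split="train")
mnli = load_dataset("multi_nli", split="train")

# Probabilities are illustrative, not the training configuration of this model.
mixed = interleave_datasets([snli, mnli], probabilities=[0.7, 0.3], seed=42)
print(next(iter(mixed)))
```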
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
BrianLiu/distilgpt2-finetuned-wikitext2
|
BrianLiu
| 2024-04-18T09:58:23Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-10T06:42:44Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_keras_callback
model-index:
- name: BrianLiu/distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BrianLiu/distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8578
- Validation Loss: 3.6751
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged optimizer sketch follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
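A hedged sketch (not the original training code) of how the optimizer configuration above could be rebuilt with the `create_optimizer` helper from `transformers`; the step count is a placeholder, since the dataset size is not documented here:
```python
# Rebuild the AdamWeightDecay optimizer described above (placeholder step count).
from transformers import TFAutoModelForCausalLM, create_optimizer

num_train_steps = 1000  # assumption: depends on dataset size and number of epochs

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # learning_rate
    num_train_steps=num_train_steps,
    num_warmup_steps=0,
    weight_decay_rate=0.01,  # weight_decay_rate
)

model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")
model.compile(optimizer=optimizer)  # with no loss given, the model's internal LM loss is used
```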
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8578 | 3.6751 | 0 |
### Framework versions
- Transformers 4.39.3
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
saketag73/classification_facebook_wav2vec2-base-finetuned-gtzan-1
|
saketag73
| 2024-04-18T09:57:10Z | 159 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-04-18T09:56:53Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
metrics:
- name: Accuracy
type: accuracy
value: 0.76
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1430
- Accuracy: 0.76
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
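A hedged sketch (not the original training script) of how the values above map onto `TrainingArguments`; the output directory name is an assumption:
```python
# Mapping the hyperparameters above onto transformers' TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-finetuned-gtzan",  # assumed name
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # total train batch size 32
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    seed=42,
)
```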
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2574 | 1.0 | 25 | 2.1793 | 0.445 |
| 1.9361 | 2.0 | 50 | 1.8937 | 0.475 |
| 1.7211 | 3.0 | 75 | 1.7034 | 0.54 |
| 1.5003 | 4.0 | 100 | 1.5038 | 0.63 |
| 1.3653 | 5.0 | 125 | 1.3770 | 0.7 |
| 1.2614 | 6.0 | 150 | 1.3169 | 0.69 |
| 1.1654 | 7.0 | 175 | 1.2444 | 0.725 |
| 1.0837 | 8.0 | 200 | 1.1828 | 0.755 |
| 1.0409 | 9.0 | 225 | 1.1549 | 0.755 |
| 1.0147 | 10.0 | 250 | 1.1430 | 0.76 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
cstr/Spaetzle-v8-7b
|
cstr
| 2024-04-18T09:56:43Z | 57 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/NeuDist-Ro-7B",
"johannhartmann/Brezn3",
"ResplendentAI/Flora_DPO_7B",
"conversational",
"de",
"en",
"base_model:ResplendentAI/Flora_DPO_7B",
"base_model:merge:ResplendentAI/Flora_DPO_7B",
"base_model:flemmingmiguel/NeuDist-Ro-7B",
"base_model:merge:flemmingmiguel/NeuDist-Ro-7B",
"base_model:johannhartmann/Brezn3",
"base_model:merge:johannhartmann/Brezn3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-10T18:50:38Z |
---
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/NeuDist-Ro-7B
- johannhartmann/Brezn3
- ResplendentAI/Flora_DPO_7B
base_model:
- flemmingmiguel/NeuDist-Ro-7B
- johannhartmann/Brezn3
- ResplendentAI/Flora_DPO_7B
language:
- de
- en
---
# Spaetzle-v8-7b
This model is intended to show adequate performance in German and English on a number of tasks while behaving well in practice, that is, without rambling on or intermixing tokens from the different templates seen during training and adaptation.
It is mostly a quick test and is noticeably weaker in German grammar and orthography than, for example, DiscoLM; but for use cases where that matters less than instruction following, reasoning, and similar capabilities, it may actually be slightly preferable.
It is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
* [johannhartmann/Brezn3](https://huggingface.co/johannhartmann/Brezn3)
* [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
* on the basis of [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser)
All credits are due to the creators of those original models and the training datasets involved.
For a suitable quantized version, try [cstr/Spaetzle-v8-7b-GGUF](https://huggingface.co/cstr/Spaetzle-v8-7b-GGUF)
## Evaluation
[Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cstr__Spaetzle-v8-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.27|
|AI2 Reasoning Challenge (25-Shot)|68.69|
|HellaSwag (10-Shot) |86.68|
|MMLU (5-Shot) |64.60|
|TruthfulQA (0-shot) |64.05|
|Winogrande (5-shot) |81.45|
|GSM8k (5-shot) |68.16|
EQ-Bench (v2_de): 61.04 / English (v2): 78.3
[ScandEval](https://scandeval.com/german-nlg/) 12.5.2 scores
| Benchmark | Spaetzle-v8-7b Value |
|-----------------------|----------------------------------------------------|
| Model ID | cstr/Spaetzle-v8-7b (few-shot, val) |
| Parameters | 7242 |
| Vocabulary Size | 32 |
| Context | 32768 |
| Commercial | False |
| Speed | 5,980 ± 1,031 / 1,714 ± 552 |
| Rank | 1.85 |
| GermEval | 58.90 ± 2.30 / 45.55 ± 3.30 |
| SB10k | 61.34 ± 1.90 / 72.98 ± 1.30 |
| ScaLA-De | 31.58 ± 4.39 / 65.51 ± 2.23 |
| GermanQuAD | 24.91 ± 3.98 / 60.88 ± 3.31 |
| MLSum | 67.25 ± 1.06 / 22.95 ± 2.64 |
| MMLU-De | 34.62 ± 2.20 / 50.43 ± 1.52 |
| HellaSwag-De | 48.70 ± 2.47 / 61.05 ± 1.79 |
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Spaetzle-v8-7b](https://huggingface.co/cstr/Spaetzle-v8-7b)| 45.31| 75.69| 63.94| 45.57| 57.63|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |25.59|± | 2.74|
| | |acc_norm|24.80|± | 2.72|
|agieval_logiqa_en | 0|acc |39.63|± | 1.92|
| | |acc_norm|39.78|± | 1.92|
|agieval_lsat_ar | 0|acc |23.48|± | 2.80|
| | |acc_norm|24.35|± | 2.84|
|agieval_lsat_lr | 0|acc |50.98|± | 2.22|
| | |acc_norm|51.96|± | 2.21|
|agieval_lsat_rc | 0|acc |62.08|± | 2.96|
| | |acc_norm|62.83|± | 2.95|
|agieval_sat_en | 0|acc |78.64|± | 2.86|
| | |acc_norm|79.13|± | 2.84|
|agieval_sat_en_without_passage| 0|acc |44.66|± | 3.47|
| | |acc_norm|44.66|± | 3.47|
|agieval_sat_math | 0|acc |37.27|± | 3.27|
| | |acc_norm|35.00|± | 3.22|
Average: 45.31%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |63.14|± | 1.41|
| | |acc_norm|64.51|± | 1.40|
|arc_easy | 0|acc |85.98|± | 0.71|
| | |acc_norm|82.49|± | 0.78|
|boolq | 1|acc |88.10|± | 0.57|
|hellaswag | 0|acc |66.31|± | 0.47|
| | |acc_norm|85.17|± | 0.35|
|openbookqa | 0|acc |38.00|± | 2.17|
| | |acc_norm|47.20|± | 2.23|
|piqa | 0|acc |83.35|± | 0.87|
| | |acc_norm|84.17|± | 0.85|
|winogrande | 0|acc |78.22|± | 1.16|
Average: 75.69%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |47.74|± | 1.75|
| | |mc2 |63.94|± | 1.53|
Average: 63.94%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|56.84|± | 3.60|
|bigbench_date_understanding | 0|multiple_choice_grade|66.12|± | 2.47|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|41.47|± | 3.07|
|bigbench_geometric_shapes | 0|multiple_choice_grade|22.01|± | 2.19|
| | |exact_str_match | 0.00|± | 0.00|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|31.40|± | 2.08|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.14|± | 1.60|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|56.00|± | 2.87|
|bigbench_movie_recommendation | 0|multiple_choice_grade|45.00|± | 2.23|
|bigbench_navigate | 0|multiple_choice_grade|50.70|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|70.05|± | 1.02|
|bigbench_ruin_names | 0|multiple_choice_grade|45.54|± | 2.36|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|26.05|± | 1.39|
|bigbench_snarks | 0|multiple_choice_grade|71.82|± | 3.35|
|bigbench_sports_understanding | 0|multiple_choice_grade|72.92|± | 1.42|
|bigbench_temporal_sequences | 0|multiple_choice_grade|44.20|± | 1.57|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.80|± | 1.19|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|18.23|± | 0.92|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|56.00|± | 2.87|
Average: 45.57%
Average score: 57.63%
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v8-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🧩 Configuration
The model uses ChatML and should work well with this (as it is merged from models which (mostly) saw ChatML templates in training).
```yaml
models:
- model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
# no parameters necessary for base model
- model: flemmingmiguel/NeuDist-Ro-7B
parameters:
density: 0.60
weight: 0.30
- model: johannhartmann/Brezn3
parameters:
density: 0.65
weight: 0.40
- model: ResplendentAI/Flora_DPO_7B
parameters:
density: 0.6
weight: 0.3
merge_method: dare_ties
base_model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
|
Hoga2/Working30daysworkout
|
Hoga2
| 2024-04-18T09:56:21Z | 4 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-04-18T09:55:02Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/ไม่มีชื่อ 265_20240418164824.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Style of TOK
---
# Workout
<Gallery />
## Model description
Test
## Trigger words
You should use `Style of TOK` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Hoga2/Working30daysworkout/tree/main) them in the Files & versions tab.
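A hedged usage sketch (not part of the original card) for loading the SDXL base model with `diffusers` and applying this LoRA; the prompt is illustrative, and `load_lora_weights` may need an explicit `weight_name` pointing at the actual `.safetensors` file in this repo:
```python
# Illustrative SDXL + LoRA inference; repo layout and prompt are assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Hoga2/Working30daysworkout")

image = pipe("Style of TOK, a person doing a 30-day workout at home").images[0]
image.save("workout.png")
```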
|
Hoga2/Workingbody30days
|
Hoga2
| 2024-04-18T09:49:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-04-18T09:49:55Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/ไม่มีชื่อ 265_20240418164824.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Style of TOK
---
# Working30days
<Gallery />
## Model description
Use the trigger word `Style of TOK`.
## Trigger words
You should use `Style of TOK` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Hoga2/Workingbody30days/tree/main) them in the Files & versions tab.
|
rootsec1/mistral-7B-it-aipi
|
rootsec1
| 2024-04-18T09:49:18Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-18T06:50:01Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: true
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# Fine-tuning Mistral-7B-v0.1 on Symbolic Instruction Tuning Dataset
This repository contains the fine-tuned version of the `mistralai/Mistral-7B-v0.1` model on the `sail/symbolic-instruction-tuning` dataset. The objective of this fine-tuning process is to specialize the pre-trained model for improved performance on tasks that require understanding and processing symbolic instructions.
## Model Description
`Mistral-7B-v0.1` is a transformer-based language model pre-trained on a diverse corpus of text. Our fine-tuning process aims to leverage this pre-trained model and further optimize it for the symbolic instruction tuning task provided by the `sail/symbolic-instruction-tuning` dataset.
## Dataset
The `sail/symbolic-instruction-tuning` dataset is designed to test a model's ability to comprehend and execute symbolic instructions. It consists of a series of tasks that require the model to manipulate symbolic inputs according to specific instructions.
## Fine-tuning Process
The fine-tuning process involves the following steps (a minimal sketch follows the list):
1. **Environment Setup**: Ensure that your environment has all the necessary dependencies installed, including `transformers` and `datasets` from Hugging Face.
2. **Data Preparation**: Load the `sail/symbolic-instruction-tuning` dataset using the `datasets` library and prepare it for the training process, including any necessary preprocessing steps.
3. **Model Initialization**: Load the pre-trained `mistralai/Mistral-7B-v0.1` model and prepare it for fine-tuning.
4. **Training**: Fine-tune the model on the prepared dataset using an appropriate training script. This involves setting hyperparameters, training loops, and logging.
5. **Evaluation**: Evaluate the fine-tuned model's performance on a validation set to ensure that it has learned the task effectively.
6. **Saving and Sharing**: Save the fine-tuned model and upload it to the Hugging Face model hub for easy sharing and reuse.
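A minimal sketch of these steps (not the authors' script): the dataset field names, sequence length, and training hyperparameters below are assumptions, since they are not documented in this card.
```python
# Hedged end-to-end sketch of the fine-tuning steps above.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

dataset = load_dataset("sail/symbolic-instruction-tuning", split="train")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token  # Mistral's tokenizer defines no pad token
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

def tokenize(example):
    # Field names "input"/"output" are assumptions about the dataset schema.
    text = example["input"] + "\n" + example["output"]
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mistral-7B-it-aipi",   # assumed output name
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("mistral-7B-it-aipi")
```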
## Usage
The fine-tuned model can be loaded from the Hugging Face model hub using the `transformers` library as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "rootsec1/mistal-7B-it-aipi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example usage
inputs = tokenizer("Example input", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
abdullahfurquan/mistral-7b-final
|
abdullahfurquan
| 2024-04-18T09:47:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T09:47:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minhah/videomae-base-finetuned-elderf1
|
minhah
| 2024-04-18T09:43:26Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-04-18T06:45:00Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-elderf1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-elderf1
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7031
- Accuracy: 0.3481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 720
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7358 | 0.1 | 73 | 1.6923 | 0.3408 |
| 1.7163 | 1.1 | 146 | 1.6662 | 0.3373 |
| 1.7018 | 2.1 | 219 | 1.6378 | 0.3408 |
| 1.7334 | 3.1 | 292 | 1.6563 | 0.3401 |
| 1.672 | 4.1 | 365 | 1.6568 | 0.2398 |
| 1.7095 | 5.1 | 438 | 1.6313 | 0.3387 |
| 1.7119 | 6.1 | 511 | 1.6309 | 0.3408 |
| 1.6981 | 7.1 | 584 | 1.6518 | 0.3289 |
| 1.7066 | 8.1 | 657 | 1.6313 | 0.3310 |
| 1.6476 | 9.09 | 720 | 1.6338 | 0.3289 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
vadhri/wav2vec2-base-finetuned-gtzan
|
vadhri
| 2024-04-18T09:41:19Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-04-16T10:25:21Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: music-genre-classifer-20-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.81
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# music-genre-classifer-20-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1602
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0297 | 1.0 | 113 | 2.0056 | 0.46 |
| 1.6252 | 2.0 | 226 | 1.5821 | 0.61 |
| 1.4001 | 3.0 | 339 | 1.3967 | 0.62 |
| 1.0201 | 4.0 | 452 | 1.1288 | 0.77 |
| 1.0074 | 5.0 | 565 | 1.0933 | 0.69 |
| 0.8466 | 6.0 | 678 | 0.9162 | 0.76 |
| 0.6966 | 7.0 | 791 | 0.9122 | 0.79 |
| 0.5324 | 8.0 | 904 | 0.7715 | 0.82 |
| 0.6692 | 9.0 | 1017 | 1.0549 | 0.71 |
| 0.7181 | 10.0 | 1130 | 0.8758 | 0.76 |
| 0.5585 | 11.0 | 1243 | 1.0753 | 0.7 |
| 0.4479 | 12.0 | 1356 | 1.1517 | 0.7 |
| 0.3145 | 13.0 | 1469 | 0.8475 | 0.79 |
| 0.8197 | 14.0 | 1582 | 0.8852 | 0.78 |
| 0.4665 | 15.0 | 1695 | 1.0134 | 0.77 |
| 0.2371 | 16.0 | 1808 | 1.0250 | 0.75 |
| 0.3823 | 17.0 | 1921 | 0.9516 | 0.81 |
| 0.5478 | 18.0 | 2034 | 1.2008 | 0.77 |
| 0.3165 | 19.0 | 2147 | 1.0737 | 0.8 |
| 0.1403 | 20.0 | 2260 | 0.9801 | 0.83 |
| 0.2754 | 21.0 | 2373 | 1.0137 | 0.82 |
| 0.2649 | 22.0 | 2486 | 1.2249 | 0.77 |
| 0.0686 | 23.0 | 2599 | 1.3234 | 0.77 |
| 0.0073 | 24.0 | 2712 | 1.2360 | 0.8 |
| 0.0068 | 25.0 | 2825 | 1.1338 | 0.81 |
| 0.0058 | 26.0 | 2938 | 1.2976 | 0.79 |
| 0.0054 | 27.0 | 3051 | 1.1782 | 0.83 |
| 0.0047 | 28.0 | 3164 | 1.0677 | 0.84 |
| 0.0045 | 29.0 | 3277 | 1.1128 | 0.83 |
| 0.0036 | 30.0 | 3390 | 1.1602 | 0.81 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
abdullahfurquan/mistral-xyz1
|
abdullahfurquan
| 2024-04-18T09:38:43Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T09:38:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huylys12/Llama-2-resume-fine-tune
|
huylys12
| 2024-04-18T09:36:17Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:finetune:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-18T09:29:13Z |
---
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: new_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_results
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
|
abdullahfurquan/mistral-7b_2000
|
abdullahfurquan
| 2024-04-18T09:35:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T09:35:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JayBDev/bert-finetuned-ner
|
JayBDev
| 2024-04-18T09:21:24Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-04-18T09:06:13Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0626
- Precision: 0.9372
- Recall: 0.9519
- F1: 0.9445
- Accuracy: 0.9865
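The card does not document the training corpus or label set, so the following is only a minimal usage sketch with the 🤗 Transformers pipeline; the aggregation strategy is an assumption, not something stated by the author.

```python
from transformers import pipeline

# Load the fine-tuned NER checkpoint from the Hub.
ner = pipeline(
    "token-classification",
    model="JayBDev/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans (assumed preference)
)

print(ner("Hugging Face is based in New York City."))
```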
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0735 | 1.0 | 1756 | 0.0664 | 0.9112 | 0.9376 | 0.9242 | 0.9818 |
| 0.0363 | 2.0 | 3512 | 0.0640 | 0.9358 | 0.9470 | 0.9414 | 0.9857 |
| 0.0213 | 3.0 | 5268 | 0.0626 | 0.9372 | 0.9519 | 0.9445 | 0.9865 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ShenaoZ/0.001_ablation_6iters_iter_4
|
ShenaoZ
| 2024-04-18T09:20:55Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_6iters_iter_3",
"base_model:finetune:ShenaoZ/0.001_ablation_6iters_iter_3",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-18T08:39:44Z |
---
license: mit
base_model: ShenaoZ/0.001_ablation_6iters_iter_3
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_ablation_6iters_iter_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_6iters_iter_4
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_6iters_iter_3](https://huggingface.co/ShenaoZ/0.001_ablation_6iters_iter_3) on the updated and the original datasets.
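As a hedged sketch (not part of the original card), the checkpoint can be loaded as a causal LM and prompted through its chat template; the sampling settings below are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShenaoZ/0.001_ablation_6iters_iter_4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what DPO training does in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```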
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
adekhovich/a2c-PandaReachDense-v3
|
adekhovich
| 2024-04-18T09:20:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-18T09:15:48Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual SB3 naming):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained agent.
checkpoint = load_from_hub("adekhovich/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
youngwook-kim/open-llama-2-ko-7b-ko-sharegpt-finetuned-50steps
|
youngwook-kim
| 2024-04-18T09:19:29Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:beomi/open-llama-2-ko-7b",
"base_model:finetune:beomi/open-llama-2-ko-7b",
"license:mit",
"region:us"
] | null | 2024-04-18T09:17:03Z |
---
license: mit
base_model: beomi/open-llama-2-ko-7b
tags:
- generated_from_trainer
model-index:
- name: open-llama-2-ko-7b-ko-sharegpt-finetuned-50steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# open-llama-2-ko-7b-ko-sharegpt-finetuned-50steps
This model is a fine-tuned version of [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
|
aloychow/product-review-information-density-detection-distilbert
|
aloychow
| 2024-04-18T09:16:36Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-02T05:27:15Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: product-review-information-density-detection-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# product-review-information-density-detection-distilbert
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2972
- Accuracy: 0.8387
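A minimal inference sketch (not part of the original card); the label names depend on the undocumented training config, so the output may show generic `LABEL_*` ids.

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="aloychow/product-review-information-density-detection-distilbert",
)

review = "Battery lasts two full days, charges in 40 minutes, and survived a 1 m drop."
print(classifier(review))
```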
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 67 | 0.5551 | 0.7438 |
| No log | 2.0 | 134 | 0.4422 | 0.8163 |
| No log | 3.0 | 201 | 0.4285 | 0.84 |
| No log | 4.0 | 268 | 0.4707 | 0.8263 |
| No log | 5.0 | 335 | 0.5597 | 0.825 |
| No log | 6.0 | 402 | 0.6377 | 0.8387 |
| No log | 7.0 | 469 | 0.7444 | 0.8363 |
| 0.2608 | 8.0 | 536 | 0.7492 | 0.8413 |
| 0.2608 | 9.0 | 603 | 0.7549 | 0.8387 |
| 0.2608 | 10.0 | 670 | 0.8264 | 0.845 |
| 0.2608 | 11.0 | 737 | 1.0370 | 0.8187 |
| 0.2608 | 12.0 | 804 | 0.9359 | 0.8313 |
| 0.2608 | 13.0 | 871 | 0.9810 | 0.8387 |
| 0.2608 | 14.0 | 938 | 1.0293 | 0.84 |
| 0.0251 | 15.0 | 1005 | 1.0647 | 0.8263 |
| 0.0251 | 16.0 | 1072 | 1.0693 | 0.83 |
| 0.0251 | 17.0 | 1139 | 1.0656 | 0.8425 |
| 0.0251 | 18.0 | 1206 | 1.1193 | 0.8313 |
| 0.0251 | 19.0 | 1273 | 1.1583 | 0.8187 |
| 0.0251 | 20.0 | 1340 | 1.1257 | 0.8387 |
| 0.0251 | 21.0 | 1407 | 1.1632 | 0.825 |
| 0.0251 | 22.0 | 1474 | 1.2419 | 0.8213 |
| 0.0108 | 23.0 | 1541 | 1.1635 | 0.84 |
| 0.0108 | 24.0 | 1608 | 1.1951 | 0.8287 |
| 0.0108 | 25.0 | 1675 | 1.1710 | 0.845 |
| 0.0108 | 26.0 | 1742 | 1.2204 | 0.83 |
| 0.0108 | 27.0 | 1809 | 1.2166 | 0.8413 |
| 0.0108 | 28.0 | 1876 | 1.2335 | 0.8363 |
| 0.0108 | 29.0 | 1943 | 1.2355 | 0.8363 |
| 0.007 | 30.0 | 2010 | 1.2423 | 0.8425 |
| 0.007 | 31.0 | 2077 | 1.2511 | 0.8425 |
| 0.007 | 32.0 | 2144 | 1.2563 | 0.84 |
| 0.007 | 33.0 | 2211 | 1.2501 | 0.8413 |
| 0.007 | 34.0 | 2278 | 1.2431 | 0.8375 |
| 0.007 | 35.0 | 2345 | 1.2553 | 0.8387 |
| 0.007 | 36.0 | 2412 | 1.2635 | 0.8425 |
| 0.007 | 37.0 | 2479 | 1.2970 | 0.835 |
| 0.0061 | 38.0 | 2546 | 1.2894 | 0.8375 |
| 0.0061 | 39.0 | 2613 | 1.2773 | 0.84 |
| 0.0061 | 40.0 | 2680 | 1.2836 | 0.84 |
| 0.0061 | 41.0 | 2747 | 1.2916 | 0.8375 |
| 0.0061 | 42.0 | 2814 | 1.2869 | 0.8387 |
| 0.0061 | 43.0 | 2881 | 1.3032 | 0.8287 |
| 0.0061 | 44.0 | 2948 | 1.3056 | 0.8413 |
| 0.0047 | 45.0 | 3015 | 1.2813 | 0.8438 |
| 0.0047 | 46.0 | 3082 | 1.2811 | 0.8413 |
| 0.0047 | 47.0 | 3149 | 1.2858 | 0.8413 |
| 0.0047 | 48.0 | 3216 | 1.2960 | 0.8387 |
| 0.0047 | 49.0 | 3283 | 1.2971 | 0.8387 |
| 0.0047 | 50.0 | 3350 | 1.2972 | 0.8387 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
IbtiHt/commentgpt-ft
|
IbtiHt
| 2024-04-18T09:12:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T09:12:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MelanieKoe/w2v2-base-pretrained_lr5e-5_at0.8_da0.3
|
MelanieKoe
| 2024-04-18T09:04:28Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-01T12:13:16Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v2-base-pretrained_lr5e-5_at0.8_da0.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2-base-pretrained_lr5e-5_at0.8_da0.3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8781
- Wer: 0.7877
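A minimal transcription sketch (not from the original card), assuming the checkpoint ships a CTC head and processor; the audio file path is a placeholder for any 16 kHz mono recording.

```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 checkpoint for speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="MelanieKoe/w2v2-base-pretrained_lr5e-5_at0.8_da0.3",
)

print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```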
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 24.822 | 17.86 | 250 | 8.9601 | 1.0 |
| 6.2153 | 35.71 | 500 | 5.9697 | 1.0 |
| 3.5607 | 53.57 | 750 | 5.5112 | 1.0 |
| 3.2305 | 71.43 | 1000 | 5.6378 | 1.0 |
| 2.9631 | 89.29 | 1250 | 5.4385 | 1.0 |
| 2.5463 | 107.14 | 1500 | 5.2806 | 0.9953 |
| 2.1207 | 125.0 | 1750 | 5.3065 | 0.9492 |
| 1.7629 | 142.86 | 2000 | 5.4365 | 0.9158 |
| 1.4631 | 160.71 | 2250 | 5.1678 | 0.8667 |
| 1.2158 | 178.57 | 2500 | 5.3097 | 0.8428 |
| 1.0507 | 196.43 | 2750 | 5.6917 | 0.8279 |
| 0.9326 | 214.29 | 3000 | 5.7407 | 0.8197 |
| 0.8245 | 232.14 | 3250 | 5.5588 | 0.8039 |
| 0.7415 | 250.0 | 3500 | 5.7107 | 0.7860 |
| 0.694 | 267.86 | 3750 | 5.8551 | 0.7971 |
| 0.6634 | 285.71 | 4000 | 5.8781 | 0.7877 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
gildead/mistral-aes-966
|
gildead
| 2024-04-18T09:01:41Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-04-18T08:16:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sergheevadrian/group8-million-song-model
|
sergheevadrian
| 2024-04-18T09:00:46Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-04-18T09:00:05Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Smd-Arshad/Llama-python-finetuned
|
Smd-Arshad
| 2024-04-18T08:57:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-15T16:50:28Z |
---
license: llama2
---
# Llama 2 fine-tuned on Intel hardware using PEFT and LoRA
**Description:** This is a fine-tune of Meta's Llama 2, a transformer-based model, tailored for converting natural-language instructions into Python code snippets. It has been optimized for efficient deployment on resource-constrained hardware through techniques such as LoRA (Low-Rank Adaptation) and QLoRA (Quantized Low-Rank Adaptation), enabling 4-bit quantization without sacrificing performance. Leveraging advanced optimization libraries, such as Intel's Accelerate and Extension for PyTorch, it offers streamlined fine-tuning and inference on Intel Xeon Scalable processors.
**Usage:** To use this fine-tuned Llama 2 for Python code generation, simply load the model with the Hugging Face Transformers library. Make sure your prompts follow the template structure `<s>[INST] instruction [/INST] answer </s>`. To fine-tune further, use the Hugging Face Trainer class, specifying your training configuration and leveraging Intel hardware and oneAPI optimization libraries for enhanced performance.
**Use in Transformers**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Smd-Arshad/Llama-python-finetuned")
model = AutoModelForCausalLM.from_pretrained("Smd-Arshad/Llama-python-finetuned")
```
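A hedged generation sketch that applies the prompt template described above, reusing `tokenizer` and `model` from the snippet; the instruction text and sampling settings are illustrative only.

```python
import torch

prompt = "<s>[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Low temperature keeps the generated code deterministic-ish; values are assumptions.
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.2)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```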
|
satyanshu404/gpt2-finetuned-justification-v3
|
satyanshu404
| 2024-04-18T08:54:08Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-18T08:21:14Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: gpt2-finetuned-justification-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-justification-v3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2415
- Rouge1: 30.8957
- Rouge2: 13.5597
- Rougel: 22.4384
- Rougelsum: 28.2668
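A minimal usage sketch (not from the original card), assuming the checkpoint works with the text2text-generation pipeline as its tags indicate; the input text and expected prompt format are illustrative placeholders.

```python
from transformers import pipeline

# Load the encoder-decoder justification model from the Hub.
generator = pipeline(
    "text2text-generation",
    model="satyanshu404/gpt2-finetuned-justification-v3",
)

# Placeholder input: the exact prompt format used during training isn't documented.
print(generator("Explain why the retrieved document is relevant to the query.", max_new_tokens=128))
```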
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 338 | 0.1980 | 30.0775 | 13.8145 | 22.3863 | 28.0341 |
| 0.226 | 2.0 | 676 | 0.1972 | 28.9676 | 13.7684 | 21.8084 | 26.6768 |
| 0.1594 | 3.0 | 1014 | 0.2007 | 29.8576 | 13.3727 | 22.1581 | 27.5726 |
| 0.1594 | 4.0 | 1352 | 0.2071 | 32.2090 | 13.7848 | 22.8787 | 29.0171 |
| 0.1259 | 5.0 | 1690 | 0.2146 | 28.5240 | 13.5821 | 21.4908 | 26.2550 |
| 0.1046 | 6.0 | 2028 | 0.2211 | 26.1623 | 13.1641 | 21.5936 | 25.0346 |
| 0.1046 | 7.0 | 2366 | 0.2294 | 28.7169 | 13.4858 | 21.1068 | 26.1213 |
| 0.0894 | 8.0 | 2704 | 0.2355 | 30.8957 | 13.5597 | 22.4384 | 28.2668 |
| 0.0785 | 9.0 | 3042 | 0.2398 | 30.8957 | 13.5597 | 22.4384 | 28.2668 |
| 0.0785 | 10.0 | 3380 | 0.2415 | 30.8957 | 13.5597 | 22.4384 | 28.2668 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2
|
lxsure/Sniper_29
|
lxsure
| 2024-04-18T08:48:31Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-18T08:13:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nayan4HF/code-mistral-7b-text-to-python
|
Nayan4HF
| 2024-04-18T08:44:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-29T23:24:39Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- generator
model-index:
- name: code-mistral-7b-text-to-python
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-mistral-7b-text-to-python
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
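Since this repository stores a PEFT (LoRA) adapter rather than full weights, a minimal loading sketch looks like the following; the dtype and device settings are assumptions for fitting the 7B base model in limited memory.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "Nayan4HF/code-mistral-7b-text-to-python"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```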
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
bgsmagnuson/tiny-llama-code-feedback
|
bgsmagnuson
| 2024-04-18T08:42:35Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T08:42:31Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tiny-llama-code-feedback
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-llama-code-feedback
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
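A hedged loading sketch (not from the original card): the repo holds a PEFT adapter, so it is attached to the TinyLlama chat base and prompted through the base model's chat template.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "bgsmagnuson/tiny-llama-code-feedback"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

# Illustrative request; the training task ("code feedback") is inferred from the repo name.
messages = [{"role": "user", "content": "Give feedback on this code: def add(a, b): return a - b"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```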
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
JoonkyuBest/polyglot-ko-1.3b-lite1.0
|
JoonkyuBest
| 2024-04-18T08:39:02Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-17T07:32:47Z |
---
license: apache-2.0
language:
- ko
tags:
- pytorch
- causal-lm
---
# polyglot-ko-1.3b-lite1.0
- A model fine-tuned from [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b/)
- Fine-tuned with QLoRA, one of the PEFT techniques
## Purpose
This is an attempt to build an environment for researching and developing Korean LLMs on an ordinary laptop without high specs.<br/>
Because AI development moves so quickly, the environment had to be built while working around legacy compatibility issues.<br/>
If your own model responds slowly or generates off-target answers, that usually comes from gaps in LLM knowledge.<br/>
I hope Windows developers can use this source as a starting point to step quickly over the threshold of AI development.
The development project source for this model is open on [GitHub](https://github.com/JoonkyuChoi/polyglot-ko-1.3b-lite).
## Environment
Almost no system RAM is consumed; VRAM usage is 2.7 GB.
```
- System
OS Windows 11 Home
RAM 16 GB
VRAM 2.7 GB
Graphic Card GeForce RTX 3060(GPU=1, VRAM=6GB)
- packages
cuda 12.1.105
cudnn 8.0
pytorch 2.2.2
python 3.10.14
conda 24.3.0
accelerate 0.29.2
bitsandbytes 0.43.0
gradio 4.26.0
tokenizers 0.15.2
transformers 4.39.3
wandb 0.16.6
- training parameters
epochs 5
batch_size 16
micro_batch_size 4
learning_rate 1e-3
batch_size 3
lora_r 8
lora_alpha 16
lora_dropout 0.05
lora_target_modules query_key_value
```
## Training dataset
This model was trained by extracting only 1,000 samples from [KoAlpaca_v1.1a_textonly.json](https://github.com/Beomi/KoAlpaca/blob/main/train_v1.1b/KoAlpaca_v1.1a_textonly.json), keeping training fast with the most efficient settings, and then running the [train > merge > save > inference] steps.<br/>
The [dataset](./assets/KoAlpaca_v1.1a_textonly.json) that was actually used is also included.
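A minimal sketch of the sampling step described above (the sample size and filename follow the card; whether the subset was random or the first 1,000 entries is not documented, so the random seed and output path are assumptions):

```python
import json
import random

# Load the full KoAlpaca text-only dataset and keep a 1,000-sample subset for quick training runs.
with open("KoAlpaca_v1.1a_textonly.json", encoding="utf-8") as f:
    records = json.load(f)

random.seed(42)  # assumption: the card does not state how the subset was chosen
subset = random.sample(records, 1000)

with open("KoAlpaca_v1.1a_textonly_1k.json", "w", encoding="utf-8") as f:
    json.dump(subset, f, ensure_ascii=False, indent=2)
```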
## Screenshots
Note the differences between the two graphs.<br/>
e3b16 means epochs=3, batch_size=16.<br/>
e5b16 means epochs=5, batch_size=16.
### Training graph
[](./assets/gradio-train.png)
### Evaluation graph
[](./assets/gradio-eval.png)
### Inference (generation) prompter
[](./assets/prompter.png)
## License
Released under the [Apache 2.0](./LICENSE) license.<br/>
Please observe the license terms.
|
goncaavci/peft-llama-incident-factor-trail9
|
goncaavci
| 2024-04-18T08:35:55Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"base_model:finetune:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-18T08:30:11Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** goncaavci
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
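A hedged loading sketch (not from the original card); whether the repo holds merged 16-bit weights or adapter-only weights is not stated, so this assumes a standard 🤗 Transformers checkpoint and an illustrative prompt.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "goncaavci/peft-llama-incident-factor-trail9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt: the exact instruction format used during fine-tuning isn't documented.
prompt = "List the contributing factors of the following incident report:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```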
|
iayrots/swin-tiny-patch4-window7-224-finetuned-eurosat
|
iayrots
| 2024-04-18T08:34:55Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-04-18T08:15:49Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.98
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0601
- Accuracy: 0.98
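A minimal inference sketch (not from the original card); the image path is a placeholder for any local file or URL.

```python
from transformers import pipeline

# Load the fine-tuned Swin classifier from the Hub.
classifier = pipeline(
    "image-classification",
    model="iayrots/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

print(classifier("example_satellite_tile.png"))  # placeholder image path
```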
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2994 | 1.0 | 190 | 0.1234 | 0.9604 |
| 0.1853 | 2.0 | 380 | 0.0705 | 0.9741 |
| 0.158 | 3.0 | 570 | 0.0601 | 0.98 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
|
HuggingFaceH4
| 2024-04-18T08:31:56Z | 618 | 265 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mixtral",
"text-generation",
"trl",
"orpo",
"generated_from_trainer",
"conversational",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"arxiv:2403.07691",
"arxiv:2311.07911",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"base_model:finetune:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-10T16:00:24Z |
---
license: apache-2.0
base_model: mistral-community/Mixtral-8x22B-v0.1
tags:
- trl
- orpo
- generated_from_trainer
datasets:
- argilla/distilabel-capybara-dpo-7k-binarized
model-index:
- name: zephyr-orpo-141b-A35b-v0.1
results: []
inference:
parameters:
temperature: 0.7
---
<img src="https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1/resolve/main/logo.png" alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 141B-A39B
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A39B is the latest model in the series, and is a fine-tuned version of [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) that was trained using a novel alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691) with **7k instances** for **1.3 hours** on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A39B, we used the [`argilla/distilabel-capybara-dpo-7k-binarized`](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs.
> [!NOTE]
> This model was trained collaboratively between Argilla, KAIST, and Hugging Face
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** A Mixture of Experts (MoE) model with 141B total parameters and 39B active parameters. (We initially made a small error in calculating the number of active parameters for the model ID. The model card states the correct number.) Fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English.
- **License:** Apache 2.0
- **Finetuned from model:** [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Dataset:** https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized
## Performance
Zephyr 141B-A39B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911). The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.
| Model | MT Bench | IFEval | BBH | AGIEval |
|-----------------------------------------------------------------------------------------------------|---------:|-------:|------:|--------:|
| [zephyr-orpo-141b-A39b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1) | 8.17 | 65.06 | 58.96 | 44.16 |
| [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) | 8.26 | 52.13 | 48.50 | 41.16 |
| [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8.30 | 55.08 | 45.31 | 47.68 |
## Intended uses & limitations
The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install 'transformers>=4.39.3'
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{
"role": "system",
"content": "You are Zephyr, a helpful assistant.",
},
{"role": "user", "content": "Explain how Mixture of Experts work in language a child would understand."},
]
outputs = pipe(
messages,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
)
print(outputs[0]["generated_text"][-1]["content"])
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr 141B-A39B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
It is also unknown what size and composition of corpus was used to train the base model (`mistral-community/Mixtral-8x22B-v0.1`); however, it is likely to have included a mix of web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
## Citation
If you find Zephyr 141B-A39B useful in your work, please cite the ORPO paper:
```
@misc{hong2024orpo,
title={ORPO: Monolithic Preference Optimization without Reference Model},
author={Jiwoo Hong and Noah Lee and James Thorne},
year={2024},
eprint={2403.07691},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
You may also wish to cite the creators of this model:
```
@misc{zephyr_141b,
author = {Alvaro Bartolome and Jiwoo Hong and Noah Lee and Kashif Rasul and Lewis Tunstall},
title = {Zephyr 141B A39B},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1}}
}
```
|
vaibhav1/Mongolian_GPT_FakeNews_Comprehendo
|
vaibhav1
| 2024-04-18T08:31:09Z | 19 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:knowledgator/comprehend_it-base",
"base_model:finetune:knowledgator/comprehend_it-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-05T14:09:56Z |
---
license: apache-2.0
base_model: knowledgator/comprehend_it-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Mongolian_GPT_FakeNews_Comprehendo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mongolian_GPT_FakeNews_Comprehendo
This model is a fine-tuned version of [knowledgator/comprehend_it-base](https://huggingface.co/knowledgator/comprehend_it-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3175
- Accuracy: 0.8393
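A minimal usage sketch (not from the original card); the head and label names are not documented, so this simply runs the generic text-classification pipeline on a placeholder input.

```python
from transformers import pipeline

# Load the fine-tuned DeBERTa fake-news detector from the Hub.
detector = pipeline(
    "text-classification",
    model="vaibhav1/Mongolian_GPT_FakeNews_Comprehendo",
)

text = "..."  # placeholder: paste the Mongolian news text to check here
print(detector(text))
```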
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1788 | 1.0 | 11 | 0.7540 | 0.8571 |
| 0.2589 | 2.0 | 22 | 0.6399 | 0.8571 |
| 0.1117 | 3.0 | 33 | 0.6795 | 0.8125 |
| 0.0829 | 4.0 | 44 | 0.6606 | 0.8571 |
| 0.0037 | 5.0 | 55 | 0.7375 | 0.8482 |
| 0.0017 | 6.0 | 66 | 0.8388 | 0.8393 |
| 0.0009 | 7.0 | 77 | 0.8872 | 0.8393 |
| 0.0007 | 8.0 | 88 | 0.9371 | 0.8393 |
| 0.0005 | 9.0 | 99 | 0.9949 | 0.8393 |
| 0.0004 | 10.0 | 110 | 1.0329 | 0.8393 |
| 0.0003 | 11.0 | 121 | 1.0626 | 0.8393 |
| 0.0003 | 12.0 | 132 | 1.0800 | 0.8393 |
| 0.0002 | 13.0 | 143 | 1.0993 | 0.8393 |
| 0.0002 | 14.0 | 154 | 1.1330 | 0.8393 |
| 0.0002 | 15.0 | 165 | 1.1925 | 0.8393 |
| 0.0001 | 16.0 | 176 | 1.2286 | 0.8393 |
| 0.0001 | 17.0 | 187 | 1.2468 | 0.8393 |
| 0.0001 | 18.0 | 198 | 1.2586 | 0.8393 |
| 0.0001 | 19.0 | 209 | 1.2686 | 0.8393 |
| 0.0001 | 20.0 | 220 | 1.2758 | 0.8393 |
| 0.0001 | 21.0 | 231 | 1.2836 | 0.8393 |
| 0.0001 | 22.0 | 242 | 1.2914 | 0.8393 |
| 0.0001 | 23.0 | 253 | 1.2978 | 0.8393 |
| 0.0001 | 24.0 | 264 | 1.3027 | 0.8393 |
| 0.0001 | 25.0 | 275 | 1.3070 | 0.8393 |
| 0.0001 | 26.0 | 286 | 1.3106 | 0.8393 |
| 0.0001 | 27.0 | 297 | 1.3131 | 0.8393 |
| 0.0001 | 28.0 | 308 | 1.3156 | 0.8393 |
| 0.0001 | 29.0 | 319 | 1.3170 | 0.8393 |
| 0.0001 | 30.0 | 330 | 1.3175 | 0.8393 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-04-18T08:30:14Z | 70,057 | 32 | null |
[
"gguf",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"16-bit",
"GGUF",
"mixtral",
"moe",
"text-generation",
"fr",
"en",
"es",
"it",
"de",
"base_model:mistralai/Mixtral-8x22B-Instruct-v0.1",
"base_model:quantized:mistralai/Mixtral-8x22B-Instruct-v0.1",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2024-04-17T17:29:25Z |
---
license: apache-2.0
base_model: mistralai/Mixtral-8x22B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
model_name: Mixtral-8x22B-Instruct-v0.1-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
- GGUF
- mixtral
- moe
language:
- fr
- en
- es
- it
- de
---
# Mixtral-8x22B-Instruct-v0.1-GGUF
The GGUF and quantized models here are based on the [mistralai/Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) model.
## How to download
You can download only the quants you need instead of cloning the entire repository as follows:
```
huggingface-cli download MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF --local-dir . --include '*Q2_K*gguf'
```
## Load sharded model
`llama_load_model_from_file` will detect the number of files and will load the additional tensors from the rest of the files.
```sh
llama.cpp/main -m Mixtral-8x22B-Instruct-v0.1.Q2_K-00001-of-00005.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 1024 -e
```
Original README
---
# Model Card for Mixtral-8x22B-Instruct-v0.1
The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1).
## Run the model
```python
from transformers import AutoModelForCausalLM
from mistral_common.protocol.instruct.messages import (
AssistantMessage,
UserMessage,
)
from mistral_common.protocol.instruct.tool_calls import (
Tool,
Function,
)
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.normalize import ChatCompletionRequest
device = "cuda" # the device to load the model onto
tokenizer_v3 = MistralTokenizer.v3()
mistral_query = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris"),
],
model="test",
)
encodeds = tokenizer_v3.encode_chat_completion(mistral_query).tokens
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
sp_tokenizer = tokenizer_v3.instruct_tokenizer.tokenizer
decoded = sp_tokenizer.decode(generated_ids[0])
print(decoded)
```
# Instruct tokenizer
The HuggingFace tokenizer included in this release should match our own. To compare:
`pip install mistral-common`
```py
from mistral_common.protocol.instruct.messages import (
AssistantMessage,
UserMessage,
)
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.normalize import ChatCompletionRequest
from transformers import AutoTokenizer
tokenizer_v3 = MistralTokenizer.v3()
mistral_query = ChatCompletionRequest(
messages=[
UserMessage(content="How many experts ?"),
AssistantMessage(content="8"),
UserMessage(content="How big ?"),
AssistantMessage(content="22B"),
UserMessage(content="Noice 🎉 !"),
],
model="test",
)
hf_messages = mistral_query.model_dump()['messages']
tokenized_mistral = tokenizer_v3.encode_chat_completion(mistral_query).tokens
tokenizer_hf = AutoTokenizer.from_pretrained('mistralai/Mixtral-8x22B-Instruct-v0.1')
tokenized_hf = tokenizer_hf.apply_chat_template(hf_messages, tokenize=True)
assert tokenized_hf == tokenized_mistral
```
# Function calling and special tokens
This tokenizer includes more special tokens related to function calling:
- [TOOL_CALLS]
- [AVAILABLE_TOOLS]
- [/AVAILABLE_TOOLS]
- [TOOL_RESULT]
- [/TOOL_RESULTS]
If you want to use this model with function calling, please be sure to apply it similarly to what is done in our [SentencePieceTokenizerV3](https://github.com/mistralai/mistral-common/blob/main/src/mistral_common/tokens/tokenizers/sentencepiece.py#L299).
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,
Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,
Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,
Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,
Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,
Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,
Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,
Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,
Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,
Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,
Valera Nemychnikova, William El Sayed, William Marshall
---
|
ltuzova/classification_tapt_unipelt_adapter
|
ltuzova
| 2024-04-18T08:29:47Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
] | null | 2024-04-17T18:23:46Z |
---
tags:
- roberta
- adapter-transformers
datasets:
- BigTMiami/amazon_helpfulness
---
# Adapter `ltuzova/classification_tapt_unipelt_adapter` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("ltuzova/classification_tapt_unipelt_adapter", source="hf", set_active=True)
```
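For a quick sanity check, a minimal inference sketch (the example sentence is an assumption, not from this card; the prediction head loaded with the adapter returns classification logits):
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review was very helpful to me.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # head loaded together with the adapter
print(logits.argmax(dim=-1).item())
```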
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
michaelw37/sc29
|
michaelw37
| 2024-04-18T08:21:34Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-13T05:20:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TungLe7661/BertPEFT650v5
|
TungLe7661
| 2024-04-18T08:18:29Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-04-18T05:51:02Z |
Evaluation results: accuracy 0.68516, F1 0.6844490693226439, precision 0.6839923350377614, recall 0.68516.
Training setup: learning rate 5e-5, 10 epochs.
|
satyanshu404/gpt2-finetuned-justification-v2
|
satyanshu404
| 2024-04-18T08:16:25Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-18T08:08:07Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-justification-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-justification-v2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 338 | 0.1999 | 32.9103 | 14.6197 | 24.2481 | 30.4464 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2
|
EinsZwo/mlm_mixed_supertagging_fullset_justbert_alpha05
|
EinsZwo
| 2024-04-18T08:16:23Z | 159 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-04-18T08:15:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
quirky-lats-at-mats/LAT_Unlearned_L8_Eps1_Genericized-PCA_WHP-Labels
|
quirky-lats-at-mats
| 2024-04-18T08:12:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-18T08:08:58Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
LLaMA model trained with LAT on WHP data: defense labels are genericized HP text, and adversary labels are the accurate next-token HP text (the same HP sentences as in the WHP paper).
LAT was performed on layer 8 with epsilon 1, and all subsequent layers were trained with rank-8 LoRA. The adversary operates in PCA-whitened space, with the PCA basis derived from genericized text at idiosyncratic Harry Potter label indices (e.g. only at the "Harry" -> "John" token).
SFT data is "VH1213141516/benign_data_v1", with num_steps=100 and max_batch_per_acc=4.
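For reference, a minimal sketch of loading this adapter onto the base model with PEFT (the repo ids come from this card; device placement is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "quirky-lats-at-mats/LAT_Unlearned_L8_Eps1_Genericized-PCA_WHP-Labels")
```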
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
xrchen11/xlm-roberta-base-finetuned-panx-de
|
xrchen11
| 2024-04-18T08:12:23Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-04-11T02:51:03Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8658
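For a quick check, a minimal inference sketch with the `transformers` token-classification pipeline (the example sentence is made up):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="xrchen11/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte das Brandenburger Tor in Berlin."))
```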
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1505 | 0.8246 |
| 0.1268 | 2.0 | 1050 | 0.1380 | 0.8503 |
| 0.0794 | 3.0 | 1575 | 0.1363 | 0.8658 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
rhaymison/Mistral-portuguese-luana-7b-Mathematics
|
rhaymison
| 2024-04-18T08:10:02Z | 22 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"portuguese",
"math",
"mathematics",
"matematica",
"conversational",
"pt",
"dataset:rhaymison/orca-math-portuguese-64k",
"base_model:rhaymison/Mistral-portuguese-luana-7b",
"base_model:finetune:rhaymison/Mistral-portuguese-luana-7b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-05T23:13:55Z |
---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portuguese
- math
- mathematics
- matematica
base_model: rhaymison/Mistral-portuguese-luana-7b
datasets:
- rhaymison/orca-math-portuguese-64k
pipeline_tag: text-generation
model-index:
- name: Mistral-portuguese-luana-7b-Mathematics
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 56.68
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-Mathematics
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 45.9
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-Mathematics
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 37.9
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-Mathematics
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 89.36
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-Mathematics
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 74.78
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-Mathematics
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 74.87
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-Mathematics
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 76.39
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-Mathematics
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 67.46
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-Mathematics
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 49.03
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-Mathematics
name: Open Portuguese LLM Leaderboard
---
# Mistral-portuguese-luana-7b-Mathematics
<p align="center">
<img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/luana-math.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
Luana Mathematics is a fine-tuned version of Luana-7b, which is based on the Mistral 7B architecture.
This model has been fine-tuned on 64k math problems and solutions derived from Microsoft's Orca-Math.
# How to use
You can use the model in full precision or with 4-bit quantization; both approaches are shown below.
Remember that verbs are important in your prompt. Tell the model how to act or behave so that you can guide it along the path of its response.
Important points like these help models (even smaller models like 7B) perform much better.
```python
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model = AutoModelForCausalLM.from_pretrained("rhaymison/Mistral-portuguese-luana-7b-Mathematics", device_map= {"": 0})
tokenizer = AutoTokenizer.from_pretrained("rhaymison/Mistral-portuguese-luana-7b-Mathematics")
model.eval()
```
You can use the model with a Pipeline, but in this example we use streaming generation.
```python
inputs = tokenizer([f"""<s>[INST] Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido.
### instrução: Se eu emprestei 100 reais para o meu irmão para que ele me pague em 3 vezes. O juros do empréstimo é 5% do valor do emprestimo por mes. Quanto meu irmão terá que me pagar no total?
[/INST]"""], return_tensors="pt")
inputs.to(model.device)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=200)
#Para calcular o valor total que seu irmão terá que pagar, precisamos primeiro calcular o interesse do empréstimo e depois adicioná -lo ao valor principal.
#O interesse do empréstimo é de 5% do valor do empréstimo por mês.Então, para cada mês, o interesse será:
#5% de 100 reais = 0,05 * 100 = 5 reais
#Como o empréstimo é devolvido em 3 vezes, o interesse total será:
#5 reais/mês * 3 meses = 15 reais
#Agora, adicionamos o interesse ao valor principal para encontrar o valor total que seu irmão terá que pagar:
#100 reais (valor principal) + 15 reais (juros) = 115 reais
#Portanto, seu irmão terá que pagar 115 reais no total.
```
```python
inputs = tokenizer([f"""<s>[INST] Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido.
### instrução: João Esta pilotando sua bicicleta a uma velocidade de 20 km por hora. Se João esta fazendo uma Viagem da cidade de Venda nova a cidade de Independencia que possuem a distancia entre si de 320km.
Quanto tempo João vai demorar para ir de uma cidade a outra?
[/INST]"""], return_tensors="pt")
inputs.to(model.device)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=200)
#Para descobrir quanto tempo levará João para viajar da cidade de venda nova para a cidade de Independência, precisamos dividir a distância total que ele precisa percorrer pela velocidade em que ele está andando.
#Distância total = 320 km
#Velocidade de João = 20 km/h
#Tempo = distância / velocidade
#Tempo = 320 km / 20 km / h
#Tempo = 16 horas
#Então, levará João 16 horas para viajar da cidade de venda nova para a cidade de Independência.
```
```python
inputs = tokenizer([f"""<s>[INST] Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido.
### instrução: Qual a area de um triangulo equilatero de 4 cm de base por 2 de altura ?
[/INST]"""], return_tensors="pt")
inputs.to(model.device)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=200)
#A área de um triângulo equilátero é dada pela fórmula:
#Área = (1/2) * base * altura
#Dado que a base é de 4 cm e a altura é de 2 cm, podemos conectar esses valores à fórmula:
#Área = (1/2) * 4 cm * 2 cm
#Área = (1/2) * 8 cm²
#Área = 4 cm²
#Portanto, a área do triângulo equilátero é de 4 centímetros quadrados.
```
If you run into memory problems such as "CUDA out of memory", you should use 4-bit or 8-bit quantization.
Running the full-precision model in Colab requires an A100.
With 4-bit or 8-bit quantization, a T4 or L4 is already enough.
# 4bits
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch
# NF4 4-bit quantization config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True
)
model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/Mistral-portuguese-luana-7b-Mathematics",
    quantization_config=bnb_config,
    device_map={"": 0}
)
```
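# 8bits
An 8-bit variant only changes the quantization config; the block below is a sketch, not part of the original recipe:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_8bit_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/Mistral-portuguese-luana-7b-Mathematics",
    quantization_config=bnb_8bit_config,
    device_map={"": 0}
)
```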
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/rhaymison/Mistral-portuguese-luana-7b-Mathematics) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|--------|
|Average |**63.6**|
|ENEM Challenge (No Images)| 56.68|
|BLUEX (No Images) | 45.90|
|OAB Exams | 37.90|
|Assin2 RTE | 89.36|
|Assin2 STS | 74.78|
|FaQuAD NLI | 74.87|
|HateBR Binary | 76.39|
|PT Hate Speech Binary | 67.46|
|tweetSentBR | 49.03|
### Comments
Any idea, help or report will always be welcome.
email: [email protected]
<div style="display:flex; flex-direction:row; justify-content:left">
<a href="https://www.linkedin.com/in/heleno-betini-2b3016175/" target="_blank">
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
</a>
<a href="https://github.com/rhaymisonbetini" target="_blank">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
  </a>
</div>
|
Angelectronic/llama2-chat_10000_200
|
Angelectronic
| 2024-04-18T08:06:56Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b-chat-bnb-4bit",
"base_model:adapter:unsloth/llama-2-7b-chat-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T00:51:24Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/llama-2-7b-chat-bnb-4bit
model-index:
- name: llama2-chat_10000_200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-chat_10000_200
This model is a fine-tuned version of [unsloth/llama-2-7b-chat-bnb-4bit](https://huggingface.co/unsloth/llama-2-7b-chat-bnb-4bit) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 4
- seed: 3407
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.379 | 0.31 | 48 | 1.0400 |
| 1.059 | 0.61 | 96 | 1.0091 |
| 1.0438 | 0.92 | 144 | 0.9990 |
| 0.9934 | 1.23 | 192 | 0.9968 |
| 0.9749 | 1.54 | 240 | 0.9926 |
| 0.9778 | 1.84 | 288 | 0.9862 |
| 0.9443 | 2.15 | 336 | 1.0046 |
| 0.8913 | 2.46 | 384 | 1.0017 |
| 0.8908 | 2.76 | 432 | 0.9996 |
| 0.8708 | 3.07 | 480 | 1.0339 |
| 0.7958 | 3.38 | 528 | 1.0386 |
| 0.8025 | 3.69 | 576 | 1.0386 |
| 0.8099 | 3.99 | 624 | 1.0386 |
| 0.7191 | 4.3 | 672 | 1.0913 |
| 0.7138 | 4.61 | 720 | 1.0926 |
| 0.723 | 4.92 | 768 | 1.0901 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2
|
ambasmk/shawgpt-ft
|
ambasmk
| 2024-04-18T08:06:47Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T08:06:35Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: shawgpt-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shawgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.5952 | 0.92 | 3 | 3.9703 |
| 4.0562 | 1.85 | 6 | 3.4469 |
| 3.4805 | 2.77 | 9 | 2.9945 |
| 2.2662 | 4.0 | 13 | 2.5629 |
| 2.6825 | 4.92 | 16 | 2.3030 |
| 2.3576 | 5.85 | 19 | 2.1146 |
| 2.123 | 6.77 | 22 | 1.9594 |
| 1.5056 | 8.0 | 26 | 1.9015 |
| 1.9725 | 8.92 | 29 | 1.8699 |
| 1.3731 | 9.23 | 30 | 1.8604 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Sumail/Ame14
|
Sumail
| 2024-04-18T08:00:23Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Sumail/Ame10",
"base_model:merge:Sumail/Ame10",
"base_model:tomaszki/stablelm-37",
"base_model:merge:tomaszki/stablelm-37",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-18T07:58:50Z |
---
base_model:
- tomaszki/stablelm-37
- Sumail/Ame10
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [tomaszki/stablelm-37](https://huggingface.co/tomaszki/stablelm-37)
* [Sumail/Ame10](https://huggingface.co/Sumail/Ame10)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Sumail/Ame10
layer_range: [0, 24]
- model: tomaszki/stablelm-37
layer_range: [0, 24]
merge_method: slerp
base_model: tomaszki/stablelm-37
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
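A configuration like this is typically materialized with the mergekit CLI, e.g. `mergekit-yaml config.yaml ./merged-model` (the file name and output path here are placeholders; check the mergekit README for the exact invocation and flags).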
|
mylas02/XLNET_SQuaD_FineTuned_v2
|
mylas02
| 2024-04-18T07:57:13Z | 56 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlnet",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-30T00:10:49Z |
---
license: mit
base_model: XLNet-base-cased
tags:
- generated_from_trainer
model-index:
- name: XLNET_SQuaD_FineTuned_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLNET_SQuaD_FineTuned_v2
This model is a fine-tuned version of [XLNet-base-cased](https://huggingface.co/XLNet-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
KnutJaegersberg/Luminex-34B-v0.1-exl2-8.0bpw
|
KnutJaegersberg
| 2024-04-18T07:54:59Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:jondurbin/bagel-34b-v0.2",
"base_model:finetune:jondurbin/bagel-34b-v0.2",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-17T14:52:36Z |
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
base_model: jondurbin/bagel-34b-v0.2
---

# ConvexAI/Luminex-34B-v0.2
This model is [SimpleSmaug-34b](https://huggingface.co/fblgit/UNA-SimpleSmaug-34b-v1beta) with LaserRMT applied.
[Join our Discord!](https://discord.gg/rJXGjmxqzS)
### Evaluation Results
Coming Soon
### Contamination Results
Coming Soon
|
Shalie/LizeHelestaPonyXL
|
Shalie
| 2024-04-18T07:52:57Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:AstraliteHeart/pony-diffusion-v6",
"base_model:adapter:AstraliteHeart/pony-diffusion-v6",
"license:other",
"region:us"
] |
text-to-image
| 2024-04-18T07:49:16Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white
skirt, white jacket, frills
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04844-2870709108-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize1st, hair ornament,
blue ski.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, gigantic
breasts, wide hips, <lora:splizeHelestaXLPony:1> lize1st, hair ornament,
blue skirt, white skirt, white jacket, frills, blue thighhighs, lace-up
boots, <lora:suruga-Style-PonyXL-DoRA-v1.1:1>
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04881-410302042-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, gigantic breasts, wide hips,
_lora_splizeHelestaXLPony_1_ lize.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lizevlk, hair ornament, armor, boots, navel,
thighhighs
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04880-602300793-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lizevlk, hair ornament,
armor, b.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lizevlk, hair ornament, armor, boots, navel,
thighhighs
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04879-37165688-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lizevlk, hair ornament,
armor, b.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize5st, baseball cap, jewelry, black choker,
belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top,
black shorts, short shorts, socks, sneakers
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04877-2448642232-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize5st, baseball cap,
jewelry,.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize5st, baseball cap, jewelry, black choker,
belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top,
black shorts, short shorts, socks, sneakers, arms at sides, expressionless,
eye contact, leaning forward, looking at another, profile, solo, standing,
balloon
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04875-1111574645-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize5st, baseball cap,
jewelry,.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize5st, baseball cap, jewelry, black choker,
belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top,
black shorts, short shorts, socks, sneakers, hands up, head rest, parted
lips, solo, blue sky, cloud, cloudy sky, dutch angle, fox shadow puppet,
outdoors, sky, statue, sunset, torii
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04874-1862930065-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize5st, baseball cap,
jewelry,.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize5st, baseball cap, jewelry, black choker,
belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top,
black shorts, short shorts, socks, sneakers, :o, blush, hands up, leaning
forward, looking at viewer, open mouth, solo, full body, lighthouse,
umbrella, waves, white background
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04872-2827486028-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize5st, baseball cap,
jewelry,.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize4st, hair ornament, blue ribbon, school
uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan,
kneehighs, socks, glasses, blush, closed mouth, hands up, looking at viewer,
smile, solo, bird, blue theme, cloud, day, dutch angle, outdoors, railing,
sky
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04870-2015792205-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize4st, hair ornament,
blue rib.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize4st, hair ornament, blue ribbon, school
uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan,
kneehighs, socks, glasses, :|, closed mouth, crossed arms, head tilt,
holding, holding animal, holding cat, looking at viewer, solo, standing,
cabbage, cowboy shot, food, groceries, indoors, jirai kei, meat, shopping,
shopping cart, spring onion, supermarket
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04866-959972789-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize4st, hair ornament,
blue rib.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize4st, hair ornament, blue ribbon, school
uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan,
kneehighs, socks, glasses, english text, looking at viewer, object hug,
solo, squatting
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04863-2919052021-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize4st, hair ornament,
blue rib.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize4st, hair ornament, blue ribbon, school
uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan,
kneehighs, socks, glasses, cowboy shot, grey background, simple background,
closed mouth, crying, crying with eyes open, hands on own face, looking at
viewer, solo, tears
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04860-3159198852-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize4st, hair ornament,
blue rib.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize3st, tiara, earrings, blue dress, white
gloves, fur-trimmed cloak, white cloak, grass, outdoors, school, signature,
blush, closed mouth, looking at viewer, smile, solo, standing
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04858-2415720447-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize3st, tiara,
earrings, blue d.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize3st, tiara, earrings, blue dress, white
gloves, fur-trimmed cloak, white cloak, close-up, flower, painting (medium),
portrait, simple background, traditional media, watercolor (medium), :d,
blush, looking at viewer, open mouth, smile, solo, standing, standing on one
leg
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04857-3581127766-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize3st, tiara,
earrings, blue d.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize3st, tiara, earrings, blue dress, white
gloves, fur-trimmed cloak, white cloak, food, outdoors, roasted sweet
potato, signature, sunset, upper body, cigarette, hand up, holding, holding
cigarette, looking away, smoking, solo
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04854-1405522214-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize3st, tiara,
earrings, blue d.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize2st, hair ornament, hair flower, blue
flower, sun hat, off-shoulder dress, bare shoulders, sandals, artist name,
from side, green theme, leaf, light particles, nature, outdoors, plant,
sparkle, wading, water, wide shot, blush, closed mouth, knees up, sitting,
solo
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04851-1524963148-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize2st, hair ornament,
hair flo.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize2st, hair ornament, hair flower, blue
flower, sun hat, off-shoulder dress, bare shoulders, sandals, bird,
birdcage, black background, cage, floating hair, flower, pink flower,
portrait, white flower, closed mouth, holding, holding hair, looking at
viewer, sitting, smile, solo, wariza
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04850-1878376897-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize2st, hair ornament,
hair flo.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize2st, hair ornament, hair flower, blue
flower, sun hat, off-shoulder dress, bare shoulders, sandals, blue
background, border, dated, feet out of frame, outside border, signature,
simple background, split mouth, white border, blush, hands up, looking at
viewer, solo
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04849-2290115010-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize2st, hair ornament,
hair flo.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white
skirt, white jacket, frills, beach, bone, from side, full body, horizon,
ocean, outdoors, pillar, plant, ruins, string of flags, vines, wading,
blush, hands up, holding, looking at viewer, parted lips, smile, solo
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04848-3524184601-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize1st, hair ornament,
blue ski.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white
skirt, white jacket, frills, book, bookshelf, handheld game console,
indoors, nintendo switch, arm support, hand up, parted lips, sitting, solo
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04847-512832944-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize1st, hair ornament,
blue ski.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white
skirt, white jacket, frills, blue thighhighs, lace-up boots
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04842-3427279500-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize1st, hair ornament,
blue ski.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl,
<lora:splizeHelestaXLPony:1> lize1st, hair ornament, blue skirt, white
skirt, white jacket, frills, blue thighhighs, lace-up boots
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/04839-2880076457-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1girl, _lora_splizeHelestaXLPony_1_ lize1st, hair ornament,
blue ski.png
base_model: AstraliteHeart/pony-diffusion-v6
instance_prompt: null
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
---
# Lize Helesta - NIJISANJI
<Gallery />
## Model description
Lize Helesta from NIJISANJI!
Trained on 6 outfits; every outfit has a trigger word corresponding to the character's appearance, plus suggested prompts that summon the related clothes and accessories.
Works well at a LoRA weight of 0.7-1.0.
## Trigger words
Debut Outfit: `lize1st, hair ornament, blue skirt, white skirt, white jacket, frills, blue thighhighs, lace-up boots`
Second Outfit: `lize2st, hair ornament, hair flower, blue flower, sun hat, off-shoulder dress, bare shoulders, sandals`
Third Outfit: `lize3st, tiara, earrings, blue dress, white gloves, fur-trimmed cloak, white cloak`
Fourth Outfit: `lize4st, hair ornament, blue ribbon, school uniform, blue serafuku, white sailor collar, blue cardigan, open cardigan, kneehighs, socks, glasses`
Fifth Outfit: `lize5st, baseball cap, jewelry, black choker, belt, blue jacket, open jacket, white shirt, sleevless shirt, tank top, black shorts, short shorts, socks, sneakers`
Valkyrie Outfit: `lizevlk, hair ornament, armor, boots, navel, thighhighs`
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shalie/LizeHelestaPonyXL/tree/main) them in the Files & versions tab.
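For programmatic use, the LoRA can also be applied with 🤗 diffusers. The snippet below is a hedged sketch rather than an official recipe: it assumes a Pony Diffusion V6 XL checkpoint available in diffusers format (use `from_single_file` for a single-file checkpoint), and the `weight_name` is a placeholder for the actual `.safetensors` file from this repository.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base checkpoint: Pony Diffusion V6 XL (swap in the checkpoint you actually use)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "AstraliteHeart/pony-diffusion-v6", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA from this repository (file name is a placeholder)
pipe.load_lora_weights("Shalie/LizeHelestaPonyXL", weight_name="<lora-file>.safetensors")

prompt = (
    "score_9, score_8_up, score_7_up, source_anime, 1girl, lize1st, "
    "hair ornament, blue skirt, white skirt, white jacket, frills"
)
# A LoRA scale of 0.7-1.0 works well for this model
image = pipe(prompt, cross_attention_kwargs={"scale": 0.8}).images[0]
image.save("lize_helesta.png")
```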
### License
This LoRA model is provided under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license.
## Restrictions:
- **Usage in Generation Services**: You are not allowed to use the model in any generation services without proper permission from the original creator.
- **Commercial Usage**: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator.
|
allknowingroger/WestLakeLaser-12B-MoE
|
allknowingroger
| 2024-04-18T07:42:44Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/PrometheusLaser-7B-slerp",
"senseable/WestLake-7B-v2",
"base_model:allknowingroger/PrometheusLaser-7B-slerp",
"base_model:merge:allknowingroger/PrometheusLaser-7B-slerp",
"base_model:senseable/WestLake-7B-v2",
"base_model:merge:senseable/WestLake-7B-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-18T07:35:40Z |
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- allknowingroger/PrometheusLaser-7B-slerp
- senseable/WestLake-7B-v2
base_model:
- allknowingroger/PrometheusLaser-7B-slerp
- senseable/WestLake-7B-v2
---
# WestLakeLaser-12B-MoE
WestLakeLaser-12B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/PrometheusLaser-7B-slerp](https://huggingface.co/allknowingroger/PrometheusLaser-7B-slerp)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
## 🧩 Configuration
```yaml
base_model: allknowingroger/PrometheusLaser-7B-slerp
experts:
- source_model: allknowingroger/PrometheusLaser-7B-slerp
positive_prompts: ["what"]
- source_model: senseable/WestLake-7B-v2
positive_prompts: ["why"]
```
## 💻 Usage
```python
# Install dependencies first (notebook syntax; in a plain shell, drop the leading "!")
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/WestLakeLaser-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
ilsp/Meltemi-7B-Instruct-v1-GGUF
|
ilsp
| 2024-04-18T07:38:22Z | 35 | 14 |
gguf
|
[
"gguf",
"mistral",
"finetuned",
"quantized",
"GGUF",
"el",
"en",
"base_model:ilsp/Meltemi-7B-Instruct-v1",
"base_model:finetune:ilsp/Meltemi-7B-Instruct-v1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-27T08:22:47Z |
---
license: apache-2.0
language:
- el
- en
tags:
- finetuned
- quantized
- GGUF
model_creator: ilsp
inference: true
base_model: ilsp/Meltemi-7B-Instruct-v1
library_name: gguf
quantized_by: ilsp
---
# Meltemi 7B Instruct Quantized models

## Description
In this repository you can find quantised GGUF variants of the [Meltemi-7B-Instruct-v1](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1) model, created using [llama.cpp](https://github.com/ggerganov/llama.cpp) at the [Institute for Language and Speech Processing](https://www.athenarc.gr/en/ilsp) of the [Athena Research & Innovation Center](https://www.athenarc.gr/en).
## Provided files
The use-case column in the table below is taken from the llama.cpp documentation.
| Name | Quant method | Bits | Size | Appr. RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [meltemi-instruct-v1_q3_K_M.bin](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-instruct-v1_q3_K_M.bin) | Q3_K_M | 3 | 3.67 GB| 6.45 GB | small, high quality loss |
| [meltemi-instruct-v1_q5_K_M.bin](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1-GGUF/blob/main/meltemi-instruct-v1_q5_K_M.bin) | Q5_K_M | 5 | 5.31 GB| 8.1 GB | large, low quality loss - recommended |
# Instruction format
The prompt format is the same as the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) format:
```
<s><|system|>
Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s>
<|user|>
Πες μου αν έχεις συνείδηση.</s>
<|assistant|>
```
# Loading the model with llama_cpp
Install llama-cpp-python (set `-DLLAMA_CUBLAS=on` if you want to use your GPU for inference):
```
# Windows PowerShell:
$env:CMAKE_ARGS="-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
# Linux/macOS equivalent:
# CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
```
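Alternatively, the GGUF file can be fetched programmatically; a minimal sketch with `huggingface_hub` (using the Q5_K_M file listed above):
```python
from huggingface_hub import hf_hub_download

# Downloads the file into the local HF cache and returns its path
model_path = hf_hub_download(
    repo_id="ilsp/Meltemi-7B-Instruct-v1-GGUF",
    filename="meltemi-instruct-v1_q5_K_M.bin",
)
print(model_path)  # pass this path to Llama(model_path=...)
```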
```python
from llama_cpp import Llama
llm = Llama(
model_path="./meltemi-instruct-v1_q5_K_M.bin", # Download the model file first
n_ctx=8192, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
system = "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη."
input_text = "Πες μου αν έχεις συνείδηση."
prompt = f"""
<|system|>
{system}
</s>
<|user|>
{input_text}
</s>
<|assistant|>
"""
output = llm(
    prompt,
    max_tokens=1024,   # maximum number of new tokens to generate
    stop=["</s>"],     # stop at the end-of-turn token
    echo=True          # include the prompt in the returned text
)
# Strip the echoed prompt so that only the assistant's reply remains
output_text = output['choices'][0]['text'][len(prompt):].strip()
print(output_text)
```
# Ethical Considerations
This model has not been aligned with human preferences, and therefore might generate misleading, harmful, or toxic content.
# Acknowledgements
The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community.
|
ASaska/Llama-2-7b-chat-hf-ft
|
ASaska
| 2024-04-18T07:31:52Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-15T19:15:40Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0
|
microsoft/rho-math-7b-interpreter-v0.1
|
microsoft
| 2024-04-18T07:26:43Z | 111 | 35 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"nlp",
"math",
"en",
"arxiv:2404.07965",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-11T16:58:00Z |
---
license: mit
tags:
- nlp
- math
language:
- en
pipeline_tag: text-generation
---
<h1 align="center">
Rho-1: Not All Tokens Are What You Need
</h1>
<p align="center">
<a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
<a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
<a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
<a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
<br>
<em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em>
</p>
## 🔥 News
- [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
  - [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively, matching DeepSeekMath with only 3% of the pretraining tokens.
  - [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
  - [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on the MATH dataset, using only 69k samples for fine-tuning.
- [2024/04/11] Rho-1 paper and repo released.
## 💡 Introduction
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
### Selective Language Modeling (SLM)
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
<br>
<em>Figure 2:
<b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
<b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
<b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.</em>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
<br>
<em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
</p>
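A minimal PyTorch sketch of this token-selection idea is shown below. It is an illustration, not the paper's implementation: the function name, the `keep_ratio`, and the assumption that `labels` are already shifted to align with the logits are all choices made for the example.
```python
import torch
import torch.nn.functional as F

def slm_loss(train_logits, ref_logits, labels, keep_ratio=0.6):
    """Illustrative Selective Language Modeling loss.

    Each token is scored by its excess loss (current-model loss minus
    reference-model loss); only the top `keep_ratio` fraction of tokens
    contributes to the training loss.
    """
    vocab = train_logits.size(-1)
    # Per-token cross-entropy for the model being trained
    train_ce = F.cross_entropy(
        train_logits.view(-1, vocab), labels.view(-1), reduction="none"
    )
    with torch.no_grad():
        # Per-token cross-entropy under the frozen reference model
        ref_ce = F.cross_entropy(
            ref_logits.view(-1, vocab), labels.view(-1), reduction="none"
        )
        excess = train_ce.detach() - ref_ce            # high excess = useful, clean token
        k = max(1, int(keep_ratio * excess.numel()))
        keep = torch.zeros_like(excess, dtype=torch.bool)
        keep[excess.topk(k).indices] = True            # keep the top-k tokens by excess loss
    # Back-propagate only through the selected tokens
    return (train_ce * keep).sum() / keep.sum()
```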
<!-- results: -->
### Evaluation Results
Base models (Few-shot CoT):
| **Model** | **Size** | **Data** | **Uniq. Token** | **Train Token** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
|:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------:|:--------:|:-------------:|:--------:|
| 1-2B Base Models | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
| [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
| >= 7B Base Models | | | | | | | | |
| Mistral | 7B | | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
| [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |
[Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):
| **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
|------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|----------|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| Open-Source Small Models | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
| [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |
## 🚀 Quick Start
### Evaluation
```sh
git clone [email protected]:microsoft/rho.git
cd rho-1/math-evaluation-harness
```
Base model few-shot evaluation:
```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```
SFT model (code-interpreter) evaluation:
```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```
Our reproduced outputs are provided in `rho-1/outputs.zip`.
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{lin2024rho1,
title={Rho-1: Not All Tokens Are What You Need},
author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
year={2024},
eprint={2404.07965},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|