modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-13 00:46:37) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 518 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-13 00:45:34) | card (string, length 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---|
lxsure/Sniper_06 | lxsure | 2024-03-28T06:35:31Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T06:29:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JY623/KoSOLAR-v2.1 | JY623 | 2024-03-28T06:32:27Z | 2,249 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:chihoonlee10/T3Q-ko-solar-dpo-v3.0",
"base_model:merge:chihoonlee10/T3Q-ko-solar-dpo-v3.0",
"base_model:rrw-x2/KoSOLAR-10.7B-v1.0",
"base_model:merge:rrw-x2/KoSOLAR-10.7B-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T06:14:05Z | ---
base_model:
- rrw-x2/KoSOLAR-10.7B-v1.0
- chihoonlee10/T3Q-ko-solar-dpo-v3.0
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Untitled Model (1)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
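For intuition, SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them rather than along a straight line; the interpolation factor `t: 0.2` in the configuration below keeps the result close to the base model. A minimal illustrative sketch of the operation (not the mergekit implementation):
```python
import numpy as np

def slerp(p, q, t, eps=1e-8):
    """Spherical interpolation between two flattened weight tensors p and q."""
    p_unit = p / (np.linalg.norm(p) + eps)
    q_unit = q / (np.linalg.norm(q) + eps)
    theta = np.arccos(np.clip(np.dot(p_unit, q_unit), -1.0, 1.0))  # angle between the tensors
    if theta < eps:  # nearly parallel: plain linear interpolation is fine
        return (1.0 - t) * p + t * q
    return (np.sin((1.0 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)
```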
### Models Merged
The following models were included in the merge:
* [rrw-x2/KoSOLAR-10.7B-v1.0](https://huggingface.co/rrw-x2/KoSOLAR-10.7B-v1.0)
* [chihoonlee10/T3Q-ko-solar-dpo-v3.0](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v3.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: chihoonlee10/T3Q-ko-solar-dpo-v3.0
layer_range: [0, 48]
- model: rrw-x2/KoSOLAR-10.7B-v1.0
layer_range: [0, 48]
merge_method: slerp
base_model: chihoonlee10/T3Q-ko-solar-dpo-v3.0
parameters:
t: 0.2
dtype: bfloat16
``` |
chihoonlee10/T3Q-ko-solar-dpo-v5.0 | chihoonlee10 | 2024-03-28T06:24:17Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T21:07:45Z | ---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

# T3Q-ko-solar-dpo-v5.0
## This model is a version of krevas/SOLAR-10.7B that has been fine-tuned with DPO.
## Model Developers Chihoon Lee (chihoonlee10), T3Q |
JY623/KoSOLAR-v2.0 | JY623 | 2024-03-28T06:14:53Z | 2,252 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Deepnoid/deep-solar-Rev-v3.0.4",
"base_model:merge:Deepnoid/deep-solar-Rev-v3.0.4",
"base_model:chihoonlee10/T3Q-ko-solar-dpo-v3.0",
"base_model:merge:chihoonlee10/T3Q-ko-solar-dpo-v3.0",
"base_model:davidkim205/nox-solar-10.7b-v4",
"base_model:merge:davidkim205/nox-solar-10.7b-v4",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-24T08:14:55Z | ---
base_model:
- chihoonlee10/T3Q-ko-solar-dpo-v3.0
- davidkim205/nox-solar-10.7b-v4
- Deepnoid/deep-solar-Rev-v3.0.4
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# ties_output_model
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [chihoonlee10/T3Q-ko-solar-dpo-v3.0](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v3.0) as a base.
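For readers unfamiliar with it, TIES merges task vectors (fine-tuned weights minus base weights) in three steps: trim each task vector to its largest-magnitude entries (the `density` in the configuration below), elect a per-parameter sign, and average only the values that agree with that sign. A simplified sketch of the idea (not the mergekit implementation):
```python
import numpy as np

def ties_merge(base, finetuned, densities, weights):
    """Illustrative TIES merge over flattened weight vectors."""
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        tau = ft - base                                      # task vector
        k = max(1, int(density * tau.size))                  # keep the top-`density` fraction
        threshold = np.sort(np.abs(tau))[-k]
        tau = np.where(np.abs(tau) >= threshold, tau, 0.0)   # trim small entries
        deltas.append(weight * tau)
    deltas = np.stack(deltas)
    elected = np.sign(deltas.sum(axis=0))                    # elect a sign per parameter
    agree = (np.sign(deltas) == elected) & (deltas != 0)
    summed = np.where(agree, deltas, 0.0).sum(axis=0)
    counts = np.maximum(agree.sum(axis=0), 1)
    return base + summed / counts                            # disjoint mean of agreeing values
```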
### Models Merged
The following models were included in the merge:
* [davidkim205/nox-solar-10.7b-v4](https://huggingface.co/davidkim205/nox-solar-10.7b-v4)
* [Deepnoid/deep-solar-Rev-v3.0.4](https://huggingface.co/Deepnoid/deep-solar-Rev-v3.0.4)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: chihoonlee10/T3Q-ko-solar-dpo-v3.0
- model: davidkim205/nox-solar-10.7b-v4
parameters:
density: 0.5
weight: 0.5
- model: Deepnoid/deep-solar-Rev-v3.0.4
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: chihoonlee10/T3Q-ko-solar-dpo-v3.0
parameters:
normalize: true
dtype: float16
``` |
Flann514/whisper-javanese-colab | Flann514 | 2024-03-28T06:10:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T06:10:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
0x0son0/nr_112 | 0x0son0 | 2024-03-28T05:59:35Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T05:06:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manish07092002/gemma-Code-Instruct-Finetune-By-Manish | manish07092002 | 2024-03-28T05:57:38Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T05:51:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lemon-mint/gemma-ko-7b-translate-v0.5 | lemon-mint | 2024-03-28T05:57:34Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"ko",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T05:24:10Z | ---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- ko
- en
widget:
- messages:
- role: user
content: "Translate into English: 시원한 바닷바람이 여름의 향기를 전해줍니다."
inference:
parameters:
max_new_tokens: 1024
finetuned_from: lemon-mint/gemma-ko-7b-it-v0.33
---
Gemma 7B Instruct Ko-En translation model experiment
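A minimal usage sketch (assuming standard transformers generation; the model id, prompt format, and example sentence are taken from this card, everything else is illustrative) that builds the first of the prompt templates shown below:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "lemon-mint/gemma-ko-7b-translate-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map requires accelerate

# First template below: Korean -> English translation.
prompt = (
    "<start_of_turn>user\nTranslate into English: "
    "시원한 바닷바람이 여름의 향기를 전해줍니다.<end_of_turn>\n<start_of_turn>model\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```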
```
<start_of_turn>user\nTranslate into English: {korean}<end_of_turn>\n<start_of_turn>model\n{english}<end_of_turn>
<start_of_turn>user\nTranslate into Korean: {english}<end_of_turn>\n<start_of_turn>model\n{korean}<end_of_turn>
<start_of_turn>user\n영어로 번역하세요: {korean}<end_of_turn>\n<start_of_turn>model\n{english}<end_of_turn>
<start_of_turn>user\n한국어로 번역하세요: {english}<end_of_turn>\n<start_of_turn>model\n{korean}<end_of_turn>
``` |
automerger/T3qm7xExperiment28-7B | automerger | 2024-03-28T05:53:14Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:yam-peleg/Experiment28-7B",
"base_model:finetune:yam-peleg/Experiment28-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T05:52:23Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- yam-peleg/Experiment28-7B
---
# T3qm7xExperiment28-7B
T3qm7xExperiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [yam-peleg/Experiment28-7B](https://huggingface.co/yam-peleg/Experiment28-7B)
## 🧩 Configuration
```yaml
models:
- model: nlpguy/T3QM7X
# No parameters necessary for base model
- model: yam-peleg/Experiment28-7B
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: nlpguy/T3QM7X
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
# Install dependencies (notebook magic; in a plain shell run `pip install -U transformers accelerate`).
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/T3qm7xExperiment28-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt and generate with a text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
CMU-AIR2/code-lora-simple | CMU-AIR2 | 2024-03-28T05:50:02Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-coder-1.3b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-1.3b-instruct",
"region:us"
]
| null | 2024-03-28T05:08:34Z | ---
library_name: peft
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
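No official snippet is provided. As a hedged placeholder, the PEFT metadata above names deepseek-ai/deepseek-coder-1.3b-instruct as the base model, so a generic adapter-loading sketch (inferred from that metadata, not supplied by the authors, with an illustrative prompt) would look like:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-1.3b-instruct"
adapter_id = "CMU-AIR2/code-lora-simple"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter from this repo

inputs = tokenizer("# Write a function that reverses a string\n", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```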
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
weightedhuman/fine-tuned-bert-news-classifier | weightedhuman | 2024-03-28T05:40:03Z | 2 | 3 | tf-keras | [
"tf-keras",
"news",
"text-classification",
"en",
"license:apache-2.0",
"region:us"
]
| text-classification | 2024-03-07T06:27:53Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- news
widget:
- text: "Researchers have made significant progress in the development of a new treatment for a rare genetic disorder. Early trials of the treatment have shown promising results, with patients experiencing improvements in their symptoms and quality of life. This breakthrough offers hope to individuals and families affected by the condition, bringing them closer to a potential cure"
---
# Fine-Tuned BERT News Classifier
## Overview
The Fine-Tuned BERT News Classifier is a natural language processing (NLP) model built on the BERT architecture. It is designed for news sentiment classification: the classifier output is passed through a sigmoid (see the prediction code below), giving a score between 0 and 1 where values near 1 indicate positive news and values near 0 indicate negative news. The model is trained to understand and categorize news articles, assisting in tasks such as sentiment analysis and news aggregation.
## Usage Instructions
### Import Necessary Libraries
```python
import tensorflow_text as text  # noqa: F401 -- registers the TF Text ops used by the saved model's preprocessing layer
import tensorflow as tf
```
### Load The Model
```python
from huggingface_hub import from_pretrained_keras
model = from_pretrained_keras("weightedhuman/fine-tuned-bert-news-classifier")
```
### Make Predictions
```python
examples = ["Community Gardens Flourish, Bringing Fresh Produce and Unity to Neighborhoods"]

# The serving signature returns raw classifier logits; apply a sigmoid so that
# scores near 1 mean positive news and scores near 0 mean negative news.
serving_results = model.signatures['serving_default'](tf.constant(examples))
serving_results = tf.sigmoid(serving_results['classifier'])
serving_results_np = serving_results.numpy()

for i in range(len(serving_results_np)):
    output_value = serving_results_np[i][0]
    print(output_value)
``` |
AzalKhan/gpt2_dpo | AzalKhan | 2024-03-28T05:21:59Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T05:21:36Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
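No snippet is provided. Since the repository is tagged as a GPT-2 text-generation model trained with TRL's DPO, a generic pipeline call (inferred from the tags, illustrative only) would be:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="AzalKhan/gpt2_dpo")
print(generator("The best way to learn programming is", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```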
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
harsh13333/ner_bert_model | harsh13333 | 2024-03-28T05:16:40Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:shipping_label_ner",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-24T06:31:42Z | ---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
datasets:
- shipping_label_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner_bert_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: shipping_label_ner
type: shipping_label_ner
config: shipping_label_ner
split: validation
args: shipping_label_ner
metrics:
- name: Precision
type: precision
value: 0.8192771084337349
- name: Recall
type: recall
value: 0.9066666666666666
- name: F1
type: f1
value: 0.8607594936708859
- name: Accuracy
type: accuracy
value: 0.903954802259887
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_bert_model
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the shipping_label_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4675
- Precision: 0.8193
- Recall: 0.9067
- F1: 0.8608
- Accuracy: 0.9040
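A hedged usage sketch with the standard transformers token-classification pipeline (the example sentence is illustrative, not taken from the shipping_label_ner data):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="harsh13333/ner_bert_model",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entity spans
)
print(ner("Ship to John Doe, 123 Main Street, Springfield via FedEx Ground"))
```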
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 7 | 1.9567 | 0.0 | 0.0 | 0.0 | 0.4294 |
| No log | 2.0 | 14 | 1.7382 | 1.0 | 0.0133 | 0.0263 | 0.4350 |
| No log | 3.0 | 21 | 1.5156 | 0.56 | 0.1867 | 0.28 | 0.5424 |
| No log | 4.0 | 28 | 1.3070 | 0.5185 | 0.3733 | 0.4341 | 0.6215 |
| No log | 5.0 | 35 | 1.1073 | 0.6792 | 0.48 | 0.5625 | 0.6667 |
| No log | 6.0 | 42 | 0.9590 | 0.6970 | 0.6133 | 0.6525 | 0.7288 |
| No log | 7.0 | 49 | 0.8036 | 0.7324 | 0.6933 | 0.7123 | 0.7853 |
| No log | 8.0 | 56 | 0.7173 | 0.6860 | 0.7867 | 0.7329 | 0.8305 |
| No log | 9.0 | 63 | 0.5963 | 0.7778 | 0.84 | 0.8077 | 0.8814 |
| No log | 10.0 | 70 | 0.5354 | 0.7901 | 0.8533 | 0.8205 | 0.8870 |
| No log | 11.0 | 77 | 0.5048 | 0.8 | 0.8533 | 0.8258 | 0.8814 |
| No log | 12.0 | 84 | 0.4992 | 0.8293 | 0.9067 | 0.8662 | 0.9096 |
| No log | 13.0 | 91 | 0.4745 | 0.8205 | 0.8533 | 0.8366 | 0.8927 |
| No log | 14.0 | 98 | 0.4489 | 0.8608 | 0.9067 | 0.8831 | 0.9153 |
| No log | 15.0 | 105 | 0.4236 | 0.8608 | 0.9067 | 0.8831 | 0.9153 |
| No log | 16.0 | 112 | 0.4621 | 0.8193 | 0.9067 | 0.8608 | 0.9096 |
| No log | 17.0 | 119 | 0.4417 | 0.85 | 0.9067 | 0.8774 | 0.9209 |
| No log | 18.0 | 126 | 0.4642 | 0.8095 | 0.9067 | 0.8553 | 0.9040 |
| No log | 19.0 | 133 | 0.4244 | 0.85 | 0.9067 | 0.8774 | 0.9096 |
| No log | 20.0 | 140 | 0.4731 | 0.8193 | 0.9067 | 0.8608 | 0.9096 |
| No log | 21.0 | 147 | 0.4697 | 0.8193 | 0.9067 | 0.8608 | 0.9040 |
| No log | 22.0 | 154 | 0.4330 | 0.8293 | 0.9067 | 0.8662 | 0.9096 |
| No log | 23.0 | 161 | 0.4531 | 0.8193 | 0.9067 | 0.8608 | 0.9040 |
| No log | 24.0 | 168 | 0.4433 | 0.8193 | 0.9067 | 0.8608 | 0.9040 |
| No log | 25.0 | 175 | 0.4477 | 0.8095 | 0.9067 | 0.8553 | 0.9040 |
| No log | 26.0 | 182 | 0.4446 | 0.8293 | 0.9067 | 0.8662 | 0.9096 |
| No log | 27.0 | 189 | 0.4578 | 0.8293 | 0.9067 | 0.8662 | 0.9096 |
| No log | 28.0 | 196 | 0.4640 | 0.8293 | 0.9067 | 0.8662 | 0.9096 |
| No log | 29.0 | 203 | 0.4683 | 0.8193 | 0.9067 | 0.8608 | 0.9040 |
| No log | 30.0 | 210 | 0.4675 | 0.8193 | 0.9067 | 0.8608 | 0.9040 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
LarryAIDraw/sliverwing-10 | LarryAIDraw | 2024-03-28T05:00:28Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2024-03-28T04:58:37Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/371035/bronya-zaychik-silverwing |
Aniket1/gemma2b_sci2k | Aniket1 | 2024-03-28T04:58:38Z | 77 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:other",
"region:us"
]
| null | 2024-03-27T04:24:11Z | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: google/gemma-2b
model-index:
- name: gemma2b_sci2k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma2b_sci2k
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
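Since this repository holds a PEFT adapter for google/gemma-2b, a minimal loading sketch (illustrative, assuming the adapter weights live in this repo; the prompt is made up) would be:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("Aniket1/gemma2b_sci2k", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Summarize: transformer models scale with data and compute.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```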
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
bhaskars113/whiskey-recipe-type-model | bhaskars113 | 2024-03-28T04:56:03Z | 4 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"region:us"
]
| text-classification | 2024-03-28T04:55:29Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: 'I made hubby some ginger syrup this afternoon. He loves a whiskey and ginger.
Instead of buying ginger ale in plastic bottles we use a soda stream and #homemade
ginger syrup. #ecofriendly #sustainable #plasticfree pic.twitter.com/kEHHYaSuXr'
- text: Four roses small batch select. Milk chocolate on the nose and palate for me
- text: 'strong. bit old fashioned. always satisfying.. smoking hot is good… wait
are we talking about my old fashion or how I like my men? ?????? Smoky Wakashi
old fashioned ?? #comeonbabylightmyfire #oldfashioned #classic #whiskey #strong
#stiff #neverdisappoints #happiness #smoky #hawaii #hawaiilife'
- text: Pineapple Demerara Old Fashioned by highproofpreacher made with his house
pineapple demerara syrup
- text: Ordered a pink drink & smoked old fashioned & both were delicious & had nice
presentations
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
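A hedged sketch of what these two training stages look like with the setfit API (toy data for illustration only; this model's actual settings are listed under Training Hyperparameters below):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy few-shot dataset; labels follow the 0-5 scheme used by this model.
train_ds = Dataset.from_dict({
    "text": ["smoky old fashioned by the campfire", "bourbon pecan pie for dessert"],
    "label": [1, 2],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=16, num_epochs=1, num_iterations=20)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # stage 1: contrastive fine-tuning of the embedding body; stage 2: fit the classification head
```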
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 4 | <ul><li>"@vurnt22 Ginger beer and bourbon is one of two times I actually drink anything ginger-y. The other is ginger ale on an airplane because it just seems like you're supposed to."</li><li>"It's not for everyone, but almost everywhere sells Moscow mules. Ginger ale is also good. If there were Ginger wines, Ginger stouts, and Ginger Whiskeys, I'd probably drink those too."</li><li>'I just like the smell of cinnamon. I like the taste too. My favorite candy is cinnamon-flavored, my favorite tea is cinnamon-flavored, my favorite whiskey is cinnamon-flavored.'</li></ul> |
| 0 | <ul><li>"Bourbon Chocolate Ice Cream. It fluffs up beautifully, doesn't melt rapidly during serving and it is one of the best chocolate flavours I have ever had. You know where the recipe is, go and get it."</li><li>'Beautiful Lady.... Now the question is, what did you put in it? I prefer Chocolate Whiskey myself....lol'</li><li>"Bourbon S'mores~Maybe it’s the promise of summer, the nostalgia of a campfire and roasting marshmallows, or the memories of childhood… but S’mores are one of my favorite treats. The way the toasty marshmallow melts the chocolate and the texture of them sandwiched between graham crackers just makes me happy. The Bourbon S’mores Bundt is a grown up version of a childhood favorite. Chocolate graham cracker cake, soaked with bourbon and topped with marshmallow sauce, a fudgy bourbon glaze and toasted marshmallows. One bite and you’ll want “some more."</li></ul> |
| 1 | <ul><li>'That could be it. Helps the smoke stick to the meat and it almost doesn’t matter what you use. I use apple cider vinegar with a little bourbon mixed in. I have zero evidence the bourbon has any effect, it just sounds cool, lol. Try that next time. Just a quick spritz to keep the edges from drying out every hour or so until you wrap it or wherever it’s about 165F.'</li><li>'My brother in law makes the absolute best smoky old fashioned. #whiskey #oldfashioned #smoky #drinks'</li><li>'Smoked to perfection ?? Bridge Street BBQ Platter | House Smoked Beef Brisket, Baked Mac n’ Cheese, Bourbon Baked Beans, Fresh Cornbread and Honey Butter, House B&B Pickles, House Pickled Onion $29 Suggested Drink Pairing: Burnt Orange And Vanilla Old Fashioned. #eatgr #grandrapidsmichigan #grandrapids #happyhour #eatlocal #bridgestreet #beercitywasmissingthebourbon #beercity #westsideisthebestside #grandrapidsmi #whiskey #grnowfood #grnow #supportlocal #grandrapidsblogger #localbusiness #iheartgr'</li></ul> |
| 3 | <ul><li>'A pastry that not only looks like the fruit it’s meant to showcase but also bursts with the fresh flavor of it. In my mind it is a fusion of two classics - a cocktail Whiskey Sour and a Lemon Meringue pie. ▫️Candied lemon and orange peel is suspended in a lemon gel made with freshly squeezed lemon juice and bourbon. ▫️the fruity core is surrounded by white chocolate ganache made with Italian meringue.'</li><li>'They also do some interesting stuff like they have a summer whiskey where it is infused tea and lemon.'</li><li>'Cheers to Peach Whiskey! This peach whiskey from olesmoky goes perfect with BBQ as a refreshing cocktail or on the rocks. I mixed mine with pineapple juice and ginger beer. The perfect refreshing smooth texture, and all the citrus notes of the peach come through. I love drinking Ole Smoky Whiskey, as it’s the best on the market. '</li></ul> |
| 2 | <ul><li>'I soaked walnuts in like 4 shots of bourbon with brown sugar and cinnamon'</li><li>'Figured pecans and bourbon both like a little smoke so decided to smoke my Bourbon Pecan pie recipe for tomorrow. Lick test on the thermometer probe says its delicious. Will find out for sure tomorrow.'</li><li>'I looooove pecan pie. I found a delicious recipe for bourbon pecan pie with homemade bourbon whip cream. I may need to make one soon'</li></ul> |
| 5 | <ul><li>'Just in Nola we have Roulaison, Lula, seven three, wetlands sake, Atelier Vie and happy raptor. A lot of bars have one or two of these available but I rarely see them featured in cocktails. I’d especially love to try flights of local rums or whiskeys alongside common brands so you can see what makes the local stuff unique'</li><li>'I have some Milk Chocolate Truffle right now and that shit is good.'</li><li>'We celebrated our one-year anniversary here and the staff made us feel so loved and celebrated. The butternut bisque for the fall menu was incredible. My whiskey sour was also phenomenal. The room was loud and cold but not uncommon for indoor restaurant.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("bhaskars113/whiskey-recipe-type-model")
# Run inference
preds = model("Four roses small batch select. Milk chocolate on the nose and palate for me")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 7 | 50.0446 | 362 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 20 |
| 1 | 20 |
| 2 | 20 |
| 3 | 20 |
| 4 | 16 |
| 5 | 16 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
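The hyperparameters above correspond to SetFit's `TrainingArguments` fields. Below is a minimal training sketch, assuming a SetFit 1.0-style API; the base Sentence Transformer name and the tiny dataset are placeholders (taken from this card's own example sentences), not the exact setup used for this checkpoint.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder base encoder; the base model of this checkpoint is not restated in this section.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Placeholder training data using label ids like those shown in the table above.
train_dataset = Dataset.from_dict({
    "text": [
        "I soaked walnuts in like 4 shots of bourbon with brown sugar and cinnamon",
        "My whiskey sour was also phenomenal.",
    ],
    "label": [2, 5],
})

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```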
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0036 | 1 | 0.239 | - |
| 0.1786 | 50 | 0.1855 | - |
| 0.3571 | 100 | 0.0275 | - |
| 0.5357 | 150 | 0.0397 | - |
| 0.7143 | 200 | 0.0063 | - |
| 0.8929 | 250 | 0.0034 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.6.1
- Transformers: 4.38.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
adasgaleus/LIM-0.75 | adasgaleus | 2024-03-28T04:51:05Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-28T04:50:41Z | ---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 20240327211222_nice_straka
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20240327211222_nice_straka
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0208
- Precision: 0.9848
- Recall: 0.9853
- F1: 0.9850
- Accuracy: 0.9923
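The label set this checkpoint predicts is not documented here, so the snippet below is only a hedged sketch of loading it with the standard `transformers` token-classification pipeline; the input sentence is a placeholder.
```python
from transformers import pipeline

token_classifier = pipeline(
    "token-classification",
    model="adasgaleus/LIM-0.75",
    aggregation_strategy="simple",  # merge word pieces into word-level predictions
)
print(token_classifier("Replace this with the text you want to tag."))
```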
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 69
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 350
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0519 | 0.09 | 300 | 0.0367 | 0.9736 | 0.9691 | 0.9713 | 0.9856 |
| 0.0518 | 0.17 | 600 | 0.0379 | 0.9717 | 0.9709 | 0.9713 | 0.9855 |
| 0.048 | 0.26 | 900 | 0.0357 | 0.9742 | 0.9692 | 0.9717 | 0.9858 |
| 0.0478 | 0.34 | 1200 | 0.0350 | 0.9736 | 0.9724 | 0.9730 | 0.9863 |
| 0.0495 | 0.43 | 1500 | 0.0366 | 0.9734 | 0.9703 | 0.9718 | 0.9856 |
| 0.0457 | 0.51 | 1800 | 0.0344 | 0.9719 | 0.9749 | 0.9734 | 0.9863 |
| 0.0464 | 0.6 | 2100 | 0.0347 | 0.9731 | 0.9717 | 0.9724 | 0.9861 |
| 0.0447 | 0.68 | 2400 | 0.0329 | 0.9743 | 0.9739 | 0.9741 | 0.9868 |
| 0.0435 | 0.77 | 2700 | 0.0332 | 0.9738 | 0.9748 | 0.9743 | 0.9868 |
| 0.0414 | 0.85 | 3000 | 0.0324 | 0.9729 | 0.9771 | 0.9750 | 0.9871 |
| 0.0412 | 0.94 | 3300 | 0.0312 | 0.9759 | 0.9756 | 0.9758 | 0.9875 |
| 0.0352 | 1.02 | 3600 | 0.0312 | 0.9749 | 0.9760 | 0.9754 | 0.9875 |
| 0.0353 | 1.11 | 3900 | 0.0304 | 0.9767 | 0.9759 | 0.9763 | 0.9878 |
| 0.0348 | 1.19 | 4200 | 0.0305 | 0.9765 | 0.9748 | 0.9757 | 0.9877 |
| 0.0362 | 1.28 | 4500 | 0.0313 | 0.9768 | 0.9738 | 0.9753 | 0.9876 |
| 0.0352 | 1.36 | 4800 | 0.0304 | 0.9764 | 0.9771 | 0.9767 | 0.9880 |
| 0.0344 | 1.45 | 5100 | 0.0306 | 0.9778 | 0.9744 | 0.9761 | 0.9880 |
| 0.0337 | 1.54 | 5400 | 0.0288 | 0.9779 | 0.9769 | 0.9774 | 0.9886 |
| 0.0328 | 1.62 | 5700 | 0.0284 | 0.9776 | 0.9777 | 0.9776 | 0.9888 |
| 0.0335 | 1.71 | 6000 | 0.0277 | 0.9783 | 0.9779 | 0.9781 | 0.9887 |
| 0.0329 | 1.79 | 6300 | 0.0284 | 0.9791 | 0.9752 | 0.9772 | 0.9886 |
| 0.0328 | 1.88 | 6600 | 0.0292 | 0.9764 | 0.9773 | 0.9768 | 0.9882 |
| 0.0316 | 1.96 | 6900 | 0.0268 | 0.9785 | 0.9773 | 0.9779 | 0.9890 |
| 0.0264 | 2.05 | 7200 | 0.0272 | 0.9776 | 0.9803 | 0.9789 | 0.9892 |
| 0.0269 | 2.13 | 7500 | 0.0274 | 0.9792 | 0.9782 | 0.9787 | 0.9891 |
| 0.027 | 2.22 | 7800 | 0.0291 | 0.9774 | 0.9782 | 0.9778 | 0.9889 |
| 0.0262 | 2.3 | 8100 | 0.0249 | 0.9809 | 0.9807 | 0.9808 | 0.9902 |
| 0.0258 | 2.39 | 8400 | 0.0255 | 0.9808 | 0.9805 | 0.9806 | 0.9900 |
| 0.0261 | 2.47 | 8700 | 0.0251 | 0.9808 | 0.9800 | 0.9804 | 0.9900 |
| 0.0251 | 2.56 | 9000 | 0.0250 | 0.9814 | 0.9788 | 0.9801 | 0.9901 |
| 0.0248 | 2.64 | 9300 | 0.0248 | 0.9813 | 0.9791 | 0.9802 | 0.9901 |
| 0.0246 | 2.73 | 9600 | 0.0248 | 0.9800 | 0.9817 | 0.9809 | 0.9902 |
| 0.0243 | 2.82 | 9900 | 0.0239 | 0.9793 | 0.9819 | 0.9806 | 0.9900 |
| 0.0241 | 2.9 | 10200 | 0.0236 | 0.9805 | 0.9823 | 0.9814 | 0.9904 |
| 0.0238 | 2.99 | 10500 | 0.0231 | 0.9822 | 0.9799 | 0.9811 | 0.9907 |
| 0.0187 | 3.07 | 10800 | 0.0259 | 0.9782 | 0.9823 | 0.9802 | 0.9901 |
| 0.0188 | 3.16 | 11100 | 0.0231 | 0.9821 | 0.9827 | 0.9824 | 0.9909 |
| 0.0189 | 3.24 | 11400 | 0.0229 | 0.9830 | 0.9802 | 0.9816 | 0.9910 |
| 0.0191 | 3.33 | 11700 | 0.0220 | 0.9815 | 0.9827 | 0.9821 | 0.9910 |
| 0.0187 | 3.41 | 12000 | 0.0223 | 0.9821 | 0.9834 | 0.9828 | 0.9912 |
| 0.018 | 3.5 | 12300 | 0.0224 | 0.9802 | 0.9829 | 0.9815 | 0.9909 |
| 0.0183 | 3.58 | 12600 | 0.0217 | 0.9823 | 0.9831 | 0.9827 | 0.9911 |
| 0.0176 | 3.67 | 12900 | 0.0214 | 0.9840 | 0.9824 | 0.9832 | 0.9916 |
| 0.0177 | 3.75 | 13200 | 0.0211 | 0.9837 | 0.9834 | 0.9835 | 0.9916 |
| 0.0173 | 3.84 | 13500 | 0.0210 | 0.9828 | 0.9840 | 0.9834 | 0.9916 |
| 0.017 | 3.92 | 13800 | 0.0207 | 0.9832 | 0.9839 | 0.9836 | 0.9916 |
| 0.0141 | 4.01 | 14100 | 0.0213 | 0.9844 | 0.9838 | 0.9841 | 0.9919 |
| 0.0129 | 4.09 | 14400 | 0.0213 | 0.9837 | 0.9849 | 0.9843 | 0.9919 |
| 0.013 | 4.18 | 14700 | 0.0228 | 0.9831 | 0.9834 | 0.9833 | 0.9915 |
| 0.0128 | 4.27 | 15000 | 0.0210 | 0.9844 | 0.9846 | 0.9845 | 0.9920 |
| 0.0126 | 4.35 | 15300 | 0.0212 | 0.9843 | 0.9842 | 0.9842 | 0.9920 |
| 0.0125 | 4.44 | 15600 | 0.0214 | 0.9845 | 0.9844 | 0.9844 | 0.9920 |
| 0.0121 | 4.52 | 15900 | 0.0217 | 0.9844 | 0.9846 | 0.9845 | 0.9921 |
| 0.012 | 4.61 | 16200 | 0.0211 | 0.9847 | 0.9848 | 0.9847 | 0.9922 |
| 0.0119 | 4.69 | 16500 | 0.0209 | 0.9845 | 0.9852 | 0.9848 | 0.9922 |
| 0.0116 | 4.78 | 16800 | 0.0211 | 0.9845 | 0.9847 | 0.9846 | 0.9922 |
| 0.0115 | 4.86 | 17100 | 0.0210 | 0.9850 | 0.9844 | 0.9847 | 0.9923 |
| 0.0115 | 4.95 | 17400 | 0.0208 | 0.9848 | 0.9853 | 0.9850 | 0.9923 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.0a0+6a974be
- Datasets 2.18.0
- Tokenizers 0.15.2
|
IslamMesabah/CoderAPI | IslamMesabah | 2024-03-28T04:49:00Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"codet5p",
"text2text-generation",
"code",
"API",
"custom_code",
"en",
"dataset:IslamMesabah/CoderAPI_Dataset",
"license:mit",
"autotrain_compatible",
"region:us"
]
| text2text-generation | 2024-03-04T11:15:58Z | ---
license: mit
datasets:
- IslamMesabah/CoderAPI_Dataset
language:
- en
metrics:
- bleu
- code_eval
tags:
- code
- API
---
### Large Language Models for instructed and effective code generation using Documentation of APIs
This thesis explores the effective utilization of Large Language Models, specifically the Instruct CodeT5+ 16 Billion model, for the generation of multi-line, ready-to-execute Python code. Departing from conventional reliance solely on pre-trained LLM knowledge, we employ API documentation to enhance the correctness of generated code for APIs both seen and unseen in the LLM's knowledge. We use the Retrieval-Augmented Generation technique to incorporate user intents expressed in English, specifically targeting APIs, and to select the most suitable segments from the relevant API documentation. These user intents and API documentation segments are then used in model prompt engineering and fine-tuning. We collect a newly synthesized dataset comprising 938 data points covering 46 distinct APIs. We demonstrate significant advancements in code generation accuracy and utility, with a 0.2 increase in ICE score and a 0.33% improvement in CodeBLEU. Our experimental evaluation provides valuable insights into the complexities of code generation, including the impact of seen and unseen API documentation on model performance and the effectiveness of prompt engineering strategies. This work underscores the importance of leveraging natural language processing techniques to address real-world challenges in software engineering, with implications for automated software development and enhanced developer productivity. |
Tegomo/Mistral_semtab | Tegomo | 2024-03-28T04:42:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T04:42:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Herry443/Mistral-7B-KNUT-ref-en-mmlu-0.6-final | Herry443 | 2024-03-28T04:37:18Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T04:08:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shidowake/240328-Swal-MS-7b-CVec | shidowake | 2024-03-28T04:36:43Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T03:15:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hemg/small-deepfake | Hemg | 2024-03-28T04:28:29Z | 22 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2024-03-28T03:24:16Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: small-deepfake
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-deepfake
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7489
- Accuracy: 0.5
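The class names are not documented in this card, so the following is only a hedged sketch of running the checkpoint through the standard audio-classification pipeline; the file path is a placeholder, and the pipeline resamples audio to the rate the model expects.
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="Hemg/small-deepfake")
print(classifier("path/to/clip.wav"))  # returns a list of {"label": ..., "score": ...} dicts
```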
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 1 | 0.6871 | 0.5 |
| No log | 1.6 | 2 | 0.7041 | 0.5 |
| No log | 2.4 | 3 | 0.7126 | 0.5 |
| No log | 3.2 | 4 | 0.7489 | 0.5 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
moondriller/solar10B-eugeneparkthebestv2 | moondriller | 2024-03-28T04:24:42Z | 2,249 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T04:06:51Z | ---
language:
- ko
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gonzalezrostani/my_awesome_wnut_GAneither | gonzalezrostani | 2024-03-28T04:24:02Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-22T13:56:13Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_GAneither
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_GAneither
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4347
- Precision: 0.6390
- Recall: 0.6725
- F1: 0.6553
- Accuracy: 0.9492
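As a hedged usage sketch (the entity labels come from the checkpoint's `id2label` config and are not documented in this card), the model can also be loaded directly with `transformers`; the input sentence below is a placeholder.
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "gonzalezrostani/my_awesome_wnut_GAneither"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Replace this with a sentence to tag."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id.item()])
```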
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 46 | 0.1668 | 0.4885 | 0.5546 | 0.5194 | 0.9340 |
| No log | 2.0 | 92 | 0.1436 | 0.6 | 0.6288 | 0.6141 | 0.9465 |
| No log | 3.0 | 138 | 0.1509 | 0.5980 | 0.5197 | 0.5561 | 0.9446 |
| No log | 4.0 | 184 | 0.1417 | 0.6109 | 0.6376 | 0.6239 | 0.9489 |
| No log | 5.0 | 230 | 0.1714 | 0.6123 | 0.6070 | 0.6096 | 0.9450 |
| No log | 6.0 | 276 | 0.1965 | 0.6138 | 0.6594 | 0.6358 | 0.9483 |
| No log | 7.0 | 322 | 0.2061 | 0.6157 | 0.6856 | 0.6488 | 0.9486 |
| No log | 8.0 | 368 | 0.2316 | 0.6387 | 0.6638 | 0.6510 | 0.9495 |
| No log | 9.0 | 414 | 0.2577 | 0.6118 | 0.6332 | 0.6223 | 0.9468 |
| No log | 10.0 | 460 | 0.2604 | 0.6137 | 0.6245 | 0.6190 | 0.9453 |
| 0.0757 | 11.0 | 506 | 0.3047 | 0.6580 | 0.5546 | 0.6019 | 0.9474 |
| 0.0757 | 12.0 | 552 | 0.2688 | 0.5962 | 0.6769 | 0.6339 | 0.9462 |
| 0.0757 | 13.0 | 598 | 0.2777 | 0.6371 | 0.6594 | 0.6481 | 0.9498 |
| 0.0757 | 14.0 | 644 | 0.2916 | 0.6697 | 0.6376 | 0.6532 | 0.9513 |
| 0.0757 | 15.0 | 690 | 0.3116 | 0.6635 | 0.6114 | 0.6364 | 0.9507 |
| 0.0757 | 16.0 | 736 | 0.2955 | 0.6509 | 0.6594 | 0.6551 | 0.9495 |
| 0.0757 | 17.0 | 782 | 0.3101 | 0.6481 | 0.6594 | 0.6537 | 0.9507 |
| 0.0757 | 18.0 | 828 | 0.3259 | 0.6296 | 0.6681 | 0.6483 | 0.9483 |
| 0.0757 | 19.0 | 874 | 0.3509 | 0.6411 | 0.5852 | 0.6119 | 0.9474 |
| 0.0757 | 20.0 | 920 | 0.3201 | 0.6129 | 0.6638 | 0.6373 | 0.9471 |
| 0.0757 | 21.0 | 966 | 0.3251 | 0.6282 | 0.6419 | 0.6350 | 0.9483 |
| 0.0062 | 22.0 | 1012 | 0.3414 | 0.6432 | 0.6376 | 0.6404 | 0.9489 |
| 0.0062 | 23.0 | 1058 | 0.3261 | 0.6187 | 0.6943 | 0.6543 | 0.9489 |
| 0.0062 | 24.0 | 1104 | 0.3364 | 0.6115 | 0.6943 | 0.6503 | 0.9480 |
| 0.0062 | 25.0 | 1150 | 0.3460 | 0.6387 | 0.6638 | 0.6510 | 0.9513 |
| 0.0062 | 26.0 | 1196 | 0.3616 | 0.6327 | 0.6769 | 0.6540 | 0.9498 |
| 0.0062 | 27.0 | 1242 | 0.3651 | 0.6277 | 0.6332 | 0.6304 | 0.9498 |
| 0.0062 | 28.0 | 1288 | 0.3759 | 0.6452 | 0.6114 | 0.6278 | 0.9480 |
| 0.0062 | 29.0 | 1334 | 0.3972 | 0.6140 | 0.5764 | 0.5946 | 0.9468 |
| 0.0062 | 30.0 | 1380 | 0.3798 | 0.6337 | 0.6725 | 0.6525 | 0.9483 |
| 0.0062 | 31.0 | 1426 | 0.3731 | 0.6468 | 0.6638 | 0.6552 | 0.9501 |
| 0.0062 | 32.0 | 1472 | 0.3828 | 0.6458 | 0.6769 | 0.6610 | 0.9510 |
| 0.002 | 33.0 | 1518 | 0.3907 | 0.6202 | 0.6987 | 0.6571 | 0.9477 |
| 0.002 | 34.0 | 1564 | 0.3871 | 0.6352 | 0.6769 | 0.6554 | 0.9483 |
| 0.002 | 35.0 | 1610 | 0.3859 | 0.6220 | 0.6681 | 0.6442 | 0.9468 |
| 0.002 | 36.0 | 1656 | 0.4158 | 0.6385 | 0.5939 | 0.6154 | 0.9477 |
| 0.002 | 37.0 | 1702 | 0.4385 | 0.6244 | 0.5590 | 0.5899 | 0.9456 |
| 0.002 | 38.0 | 1748 | 0.3967 | 0.6303 | 0.6550 | 0.6424 | 0.9486 |
| 0.002 | 39.0 | 1794 | 0.4052 | 0.6481 | 0.6594 | 0.6537 | 0.9507 |
| 0.002 | 40.0 | 1840 | 0.4050 | 0.6533 | 0.6419 | 0.6476 | 0.9501 |
| 0.002 | 41.0 | 1886 | 0.4096 | 0.6376 | 0.6376 | 0.6376 | 0.9486 |
| 0.002 | 42.0 | 1932 | 0.3997 | 0.6364 | 0.7031 | 0.6680 | 0.9492 |
| 0.002 | 43.0 | 1978 | 0.3992 | 0.6157 | 0.6856 | 0.6488 | 0.9465 |
| 0.0013 | 44.0 | 2024 | 0.3896 | 0.625 | 0.6769 | 0.6499 | 0.9483 |
| 0.0013 | 45.0 | 2070 | 0.3938 | 0.6300 | 0.6245 | 0.6272 | 0.9489 |
| 0.0013 | 46.0 | 2116 | 0.4150 | 0.6406 | 0.6070 | 0.6233 | 0.9495 |
| 0.0013 | 47.0 | 2162 | 0.3988 | 0.6387 | 0.6638 | 0.6510 | 0.9504 |
| 0.0013 | 48.0 | 2208 | 0.3993 | 0.6375 | 0.6681 | 0.6525 | 0.9501 |
| 0.0013 | 49.0 | 2254 | 0.4058 | 0.6138 | 0.6594 | 0.6358 | 0.9477 |
| 0.0013 | 50.0 | 2300 | 0.4048 | 0.6322 | 0.6681 | 0.6497 | 0.9498 |
| 0.0013 | 51.0 | 2346 | 0.4029 | 0.6318 | 0.6594 | 0.6453 | 0.9492 |
| 0.0013 | 52.0 | 2392 | 0.4081 | 0.6398 | 0.6594 | 0.6495 | 0.9501 |
| 0.0013 | 53.0 | 2438 | 0.4143 | 0.6383 | 0.6550 | 0.6466 | 0.9495 |
| 0.0013 | 54.0 | 2484 | 0.4056 | 0.6136 | 0.7074 | 0.6572 | 0.9468 |
| 0.0012 | 55.0 | 2530 | 0.4059 | 0.6382 | 0.6856 | 0.6611 | 0.9489 |
| 0.0012 | 56.0 | 2576 | 0.4117 | 0.6393 | 0.6812 | 0.6596 | 0.9498 |
| 0.0012 | 57.0 | 2622 | 0.4238 | 0.6292 | 0.6594 | 0.6439 | 0.9489 |
| 0.0012 | 58.0 | 2668 | 0.4202 | 0.6220 | 0.6681 | 0.6442 | 0.9483 |
| 0.0012 | 59.0 | 2714 | 0.4178 | 0.6466 | 0.6550 | 0.6508 | 0.9507 |
| 0.0012 | 60.0 | 2760 | 0.4098 | 0.6245 | 0.6900 | 0.6556 | 0.9483 |
| 0.0012 | 61.0 | 2806 | 0.4107 | 0.6537 | 0.6594 | 0.6565 | 0.9513 |
| 0.0012 | 62.0 | 2852 | 0.4085 | 0.6498 | 0.6725 | 0.6609 | 0.9516 |
| 0.0012 | 63.0 | 2898 | 0.4116 | 0.6337 | 0.6725 | 0.6525 | 0.9501 |
| 0.0012 | 64.0 | 2944 | 0.4124 | 0.6305 | 0.6856 | 0.6569 | 0.9492 |
| 0.0012 | 65.0 | 2990 | 0.4166 | 0.6533 | 0.6419 | 0.6476 | 0.9516 |
| 0.0009 | 66.0 | 3036 | 0.4081 | 0.6270 | 0.6681 | 0.6469 | 0.9507 |
| 0.0009 | 67.0 | 3082 | 0.4050 | 0.6417 | 0.6725 | 0.6567 | 0.9510 |
| 0.0009 | 68.0 | 3128 | 0.4057 | 0.6488 | 0.6856 | 0.6667 | 0.9523 |
| 0.0009 | 69.0 | 3174 | 0.4080 | 0.6583 | 0.6900 | 0.6738 | 0.9535 |
| 0.0009 | 70.0 | 3220 | 0.4114 | 0.6569 | 0.6856 | 0.6709 | 0.9532 |
| 0.0009 | 71.0 | 3266 | 0.4232 | 0.6579 | 0.6550 | 0.6565 | 0.9519 |
| 0.0009 | 72.0 | 3312 | 0.4120 | 0.6466 | 0.7031 | 0.6736 | 0.9516 |
| 0.0009 | 73.0 | 3358 | 0.4259 | 0.6594 | 0.6594 | 0.6594 | 0.9519 |
| 0.0009 | 74.0 | 3404 | 0.4172 | 0.6475 | 0.6900 | 0.6681 | 0.9513 |
| 0.0009 | 75.0 | 3450 | 0.4175 | 0.6434 | 0.6856 | 0.6638 | 0.9510 |
| 0.0009 | 76.0 | 3496 | 0.4255 | 0.6522 | 0.6550 | 0.6536 | 0.9510 |
| 0.0008 | 77.0 | 3542 | 0.4255 | 0.6420 | 0.6812 | 0.6610 | 0.9498 |
| 0.0008 | 78.0 | 3588 | 0.4245 | 0.6429 | 0.6681 | 0.6552 | 0.9495 |
| 0.0008 | 79.0 | 3634 | 0.4173 | 0.6449 | 0.6900 | 0.6667 | 0.9507 |
| 0.0008 | 80.0 | 3680 | 0.4227 | 0.6494 | 0.6550 | 0.6522 | 0.9510 |
| 0.0008 | 81.0 | 3726 | 0.4210 | 0.6527 | 0.6812 | 0.6667 | 0.9513 |
| 0.0008 | 82.0 | 3772 | 0.4234 | 0.6437 | 0.6943 | 0.6681 | 0.9504 |
| 0.0008 | 83.0 | 3818 | 0.4341 | 0.6483 | 0.6681 | 0.6581 | 0.9507 |
| 0.0008 | 84.0 | 3864 | 0.4355 | 0.6483 | 0.6681 | 0.6581 | 0.9507 |
| 0.0008 | 85.0 | 3910 | 0.4309 | 0.6393 | 0.6812 | 0.6596 | 0.9501 |
| 0.0008 | 86.0 | 3956 | 0.4365 | 0.6524 | 0.6638 | 0.6580 | 0.9513 |
| 0.0007 | 87.0 | 4002 | 0.4394 | 0.6594 | 0.6594 | 0.6594 | 0.9516 |
| 0.0007 | 88.0 | 4048 | 0.4357 | 0.6552 | 0.6638 | 0.6594 | 0.9513 |
| 0.0007 | 89.0 | 4094 | 0.4337 | 0.6471 | 0.6725 | 0.6595 | 0.9507 |
| 0.0007 | 90.0 | 4140 | 0.4350 | 0.6471 | 0.6725 | 0.6595 | 0.9507 |
| 0.0007 | 91.0 | 4186 | 0.4372 | 0.6525 | 0.6725 | 0.6624 | 0.9516 |
| 0.0007 | 92.0 | 4232 | 0.4340 | 0.6337 | 0.6725 | 0.6525 | 0.9489 |
| 0.0007 | 93.0 | 4278 | 0.4330 | 0.6341 | 0.6812 | 0.6568 | 0.9495 |
| 0.0007 | 94.0 | 4324 | 0.4348 | 0.6417 | 0.6725 | 0.6567 | 0.9498 |
| 0.0007 | 95.0 | 4370 | 0.4347 | 0.6402 | 0.6681 | 0.6538 | 0.9495 |
| 0.0007 | 96.0 | 4416 | 0.4351 | 0.6402 | 0.6681 | 0.6538 | 0.9495 |
| 0.0007 | 97.0 | 4462 | 0.4347 | 0.6390 | 0.6725 | 0.6553 | 0.9492 |
| 0.0006 | 98.0 | 4508 | 0.4346 | 0.6390 | 0.6725 | 0.6553 | 0.9495 |
| 0.0006 | 99.0 | 4554 | 0.4348 | 0.6390 | 0.6725 | 0.6553 | 0.9492 |
| 0.0006 | 100.0 | 4600 | 0.4347 | 0.6390 | 0.6725 | 0.6553 | 0.9492 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
rezabarati/Covid-QA | rezabarati | 2024-03-28T04:20:26Z | 67 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-03-27T06:14:51Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: rezabarati/Covid-QA
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rezabarati/Covid-QA
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5278
- Validation Loss: 2.5554
- Epoch: 1
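Only TensorFlow weights are published for this checkpoint, so a hedged inference sketch passes `framework="tf"` to the question-answering pipeline; the question and context strings below are placeholders.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="rezabarati/Covid-QA", framework="tf")
answer = qa(
    question="What are common symptoms of COVID-19?",
    context=(
        "COVID-19 commonly causes fever, cough, and fatigue; "
        "some patients also report loss of smell or taste."
    ),
)
print(answer)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```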
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 402, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.7691 | 2.8166 | 0 |
| 2.5278 | 2.5554 | 1 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ntvcie/Gemma7bVinhntV6 | ntvcie | 2024-03-28T04:09:30Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T04:09:28Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** ntvcie
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
imagepipeline/Juggernaut-XL-V9-RunDiffusion-Photo-2 | imagepipeline | 2024-03-28T04:06:52Z | 52 | 0 | diffusers | [
"diffusers",
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-03-28T04:03:49Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## Juggernaut-XL-V9-RunDiffusion-Photo-2
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7292cc1b-b504-4a42-94a4-9d6f498da995/width=450/00014-Leica%20Hasselblad%20portrait,%20hyperdetailed%20Photography,%20a%20Native%20American%20man%20walks%20proudly%20confidently%20in%20traditional%20clothing%20wi.jpeg" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - Recommended Settings for the Normal Version:
- Res: 832x1216
- Sampler: DPM++ 2M Karras
- Steps: 30-40
- CFG: 3-7 (less is a bit more realistic)
- Negative: start with no negative, and afterwards add the stuff you don't wanna see in that image
- VAE: already baked in
- HiRes: 4xNMKD-Siax_200k with 15 steps and 0.3 denoise + 1.5 upscale

And a few keywords/tokens that I regularly use in training, which might help you achieve the optimal result from this version: Architecture Photography, Wildlife Photography, Car Photography, Food Photography, Interior Photography, Landscape Photography, Hyperdetailed Photography, Cinematic Movie Still, Mid Shot Photo, Full Body Photo, Skin Details
[](https://imagepipeline.io/models/Juggernaut-XL-V9-RunDiffusion-Photo-2?id=bcf3f995-ef2f-4a4c-aed9-cd7019698a81/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php` `javascript` `node` etc ? Checkout our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sdxl/text2image/v1/run"
payload = json.dumps({
"model_id": "bcf3f995-ef2f-4a4c-aed9-cd7019698a81",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "",
"lora_weights": ""
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sdxl/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
AsphyXIA/baarat-kannada-instruct-0.1 | AsphyXIA | 2024-03-28T03:58:38Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-08T19:23:18Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: baarat-kannada-instruct
---
# Uploaded model
- **Developed by:** AsphyXIA
- **License:** apache-2.0
- **Finetuned from model :** baarat-kannada-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
l3utterfly/mistral-7b-v0.1-layla-v4-chatml | l3utterfly | 2024-03-28T03:57:43Z | 62 | 14 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"autotrain_compatible",
"endpoints_compatible",
"chatml",
"conversational",
"en",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-12T07:03:00Z | ---
license: apache-2.0
tags:
- finetuned
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
library_name: transformers
language:
- en
model_creator: l3utterfly
model_name: mistral-7b-v0.1-layla-v4-chatml
model_type: mistral
pipeline_tag: text-generation
---
# Model Card

(image by https://huggingface.co/Kronikus)
### Model Description
Mistral 7B fine-tuned on the OpenHermes 2.5 dataset, optimised for multi-turn conversation and character impersonation.
The dataset has been pre-processed by doing the following:
1. remove all refusals
2. remove any mention of AI assistant
3. split any multi-turn dialog generated in the dataset into multi-turn conversation records
4. added NSFW generated conversations from the Teatime dataset
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** Mistral 7B
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
<|im_start|>system
You are Chiharu Yamada. Embody the character and personality completely.
Chiharu is a young, computer engineer-nerd with a knack for problem solving and a passion for technology.<|im_end|>
<|im_start|>Chiharu
*Chiharu strides into the room with a smile, her eyes lighting up when she sees you. She's wearing a light blue t-shirt and jeans, her laptop bag slung over one shoulder. She takes a seat next to you, her enthusiasm palpable in the air*
Hey! I'm so excited to finally meet you. I've heard so many great things about you and I'm eager to pick your brain about computers. I'm sure you have a wealth of knowledge that I can learn from. *She grins, eyes twinkling with excitement* Let's get started!<|im_end|>
<|im_start|>user
Sure! What do you want to know about?<|im_end|>
<|im_start|>Chiharu
```
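A minimal generation sketch with plain `transformers`, assuming enough GPU memory and `accelerate` installed for `device_map="auto"`; the sampling settings are illustrative, not the author's recommendation.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "l3utterfly/mistral-7b-v0.1-layla-v4-chatml"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = (
    "<|im_start|>system\n"
    "You are Chiharu Yamada. Embody the character and personality completely.<|im_end|>\n"
    "<|im_start|>user\n"
    "Sure! What do you want to know about?<|im_end|>\n"
    "<|im_start|>Chiharu\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```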
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Model Quants
[solidrust/-AWQ](https://huggingface.co/solidrust/Layla-7B-v4-AWQ)
|
scoliono/groupchat_lora | scoliono | 2024-03-28T03:53:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T03:52:54Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** scoliono
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kmpartner/dummy-model-glue-mrpc | kmpartner | 2024-03-28T03:51:59Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-28T03:50:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
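Until the card is filled in, here is a hedged sketch based only on the repo tags (BERT, text-classification) and the MRPC naming; the paraphrase-pair input format is an assumption:

```python
from transformers import pipeline

# Repo tags indicate a BERT sequence-classification checkpoint; MRPC-style sentence pairs are assumed
classifier = pipeline("text-classification", model="kmpartner/dummy-model-glue-mrpc")
result = classifier({"text": "The company said profits rose sharply.",
                     "text_pair": "Profits increased, the company reported."})
print(result)
```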
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DiegoT200/dqn-SpaceInvadersNoFrameskip-v4 | DiegoT200 | 2024-03-28T03:47:49Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-28T03:47:18Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 544.50 +/- 76.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DiegoT200 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DiegoT200 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DiegoT200
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
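For programmatic use outside the RL Zoo CLI, a hedged sketch — the checkpoint filename and the Atari preprocessing below mirror the hyperparameters above, but they are assumptions rather than part of the original upload notes:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Filename assumed to follow the RL Zoo upload convention
checkpoint = load_from_hub("DiegoT200/dqn-SpaceInvadersNoFrameskip-v4",
                           "dqn-SpaceInvadersNoFrameskip-v4.zip")
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: Atari wrappers plus 4-frame stacking
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```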
|
Mohamed6900/MergedModel | Mohamed6900 | 2024-03-28T03:46:03Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"HuggingFaceH4/zephyr-7b-beta",
"mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-28T03:46:02Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- HuggingFaceH4/zephyr-7b-beta
- mistralai/Mistral-7B-Instruct-v0.2
---
# MergedModel
MergedModel is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## 🧩 Configuration
```yaml
base_model: HuggingFaceH4/zephyr-7b-beta
dtype: float16
gate_mode: cheap_embed
experts:
- source_model: HuggingFaceH4/zephyr-7b-beta
positive_prompts: ["You are an helpful general-pupose assistant."]
- source_model: mistralai/Mistral-7B-Instruct-v0.2
positive_prompts: ["You are a helpful assistant."]
``` |
LR-AI-Labs/tiny-universal-NER | LR-AI-Labs | 2024-03-28T03:43:04Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:Universal-NER/Pile-NER-type",
"arxiv:2308.03279",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T02:20:33Z | ---
license: apache-2.0
datasets:
- Universal-NER/Pile-NER-type
language:
- en
---
<div align="center">
# tiny-universal-NER
</div>
This model is fine-tuned from [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T).
It is trained on ChatGPT-generated [Pile-NER-type data](https://huggingface.co/datasets/Universal-NER/Pile-NER-type).
Check this [paper](https://arxiv.org/abs/2308.03279) for more information.
### How to use
You will need transformers>=4.34.
Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="LR-AI-Labs/tiny-universal-NER",
torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{
"role": "system",
"content": "A virtual assistant answers questions from a user based on the provided text.",
},
{
"role": "user",
"content": "Text: VinBigData Joint Stock Company provides platform technology solutions and advanced products based on Big Data and Artificial Intelligence. With a staff of professors, doctors, and global technology experts, VinBigData is currently developing and deploying products such as ViVi virtual assistant, VinBase the comprehensive multi-cognitive artificial intelligence ecosystem, Vizone the ecosystem of smart image analysis solutions, VinDr the medical image digitization platform,..."
},
{
"role": "assistant",
"content": "I've read this text."
},
{
"role": "user",
"content": "What describes products in the text?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|system|>
# A virtual assistant answers questions from a user based on the provided text.</s>
# <|user|>
# Text: VinBigData Joint Stock Company provides platform technology solutions and advanced products based on Big Data and Artificial Intelligence. With a staff of professors, doctors, and global technology experts, VinBigData is currently developing and deploying products such as ViVi virtual assistant, VinBase the comprehensive multi-cognitive artificial intelligence ecosystem, Vizone the ecosystem of smart image analysis solutions, VinDr the medical image digitization platform,...</s>
# <|assistant|>
# I've read this text.</s>
# <|user|>
# What describes products in the text?</s>
# <|assistant|>
# ["ViVi", "VinBase", "Vizone", "VinDr"]
```
### Note: Inferences are based on one entity type at a time. For multiple entity types, create separate instances for each type.
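In practice that means one prompt per entity type; a short sketch reusing the `pipe` object from the snippet above (the entity-type names are only illustrative):

```python
# Reuses `pipe` from the example above; entity types are illustrative
entity_types = ["company", "product", "person"]
base_messages = [
    {"role": "system", "content": "A virtual assistant answers questions from a user based on the provided text."},
    {"role": "user", "content": "Text: VinBigData Joint Stock Company provides platform technology solutions ..."},
    {"role": "assistant", "content": "I've read this text."},
]
for entity_type in entity_types:
    messages = base_messages + [{"role": "user", "content": f"What describes {entity_type} in the text?"}]
    prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    print(entity_type, pipe(prompt, max_new_tokens=256, do_sample=False)[0]["generated_text"])
```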
## License
This model and its associated data are released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license. They are primarily used for research purposes. |
tsavage68/MPT_1000_STEPS_1e7_rate_05_beta_DPO | tsavage68 | 2024-03-28T03:41:49Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"custom_code",
"base_model:mosaicml/mpt-7b-instruct",
"base_model:finetune:mosaicml/mpt-7b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T01:15:53Z | ---
license: apache-2.0
base_model: mosaicml/mpt-7b-instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MPT_1000_STEPS_1e7_rate_03_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MPT_1000_STEPS_1e7_rate_03_beta_DPO
This model is a fine-tuned version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- Rewards/chosen: -0.0230
- Rewards/rejected: -0.0291
- Rewards/accuracies: 0.5275
- Rewards/margins: 0.0061
- Logps/rejected: -21.6156
- Logps/chosen: -20.8382
- Logits/rejected: 14.2213
- Logits/chosen: 14.2239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6958 | 0.05 | 50 | 0.6969 | -0.0103 | -0.0064 | 0.4791 | -0.0040 | -21.5702 | -20.8128 | 14.2683 | 14.2709 |
| 0.6948 | 0.1 | 100 | 0.6966 | -0.0023 | 0.0014 | 0.5077 | -0.0037 | -21.5546 | -20.7968 | 14.2571 | 14.2597 |
| 0.6971 | 0.15 | 150 | 0.7007 | -0.0051 | 0.0067 | 0.4681 | -0.0117 | -21.5441 | -20.8024 | 14.2475 | 14.2501 |
| 0.6891 | 0.2 | 200 | 0.6943 | 0.0187 | 0.0174 | 0.4923 | 0.0013 | -21.5227 | -20.7548 | 14.2452 | 14.2478 |
| 0.6906 | 0.24 | 250 | 0.6922 | 0.0036 | -0.0018 | 0.4747 | 0.0054 | -21.5609 | -20.7850 | 14.2395 | 14.2421 |
| 0.6865 | 0.29 | 300 | 0.6942 | 0.0038 | 0.0023 | 0.4857 | 0.0015 | -21.5528 | -20.7845 | 14.2393 | 14.2419 |
| 0.7058 | 0.34 | 350 | 0.6939 | -0.0025 | -0.0045 | 0.5055 | 0.0020 | -21.5664 | -20.7971 | 14.2533 | 14.2559 |
| 0.6817 | 0.39 | 400 | 0.6918 | -0.0255 | -0.0318 | 0.5143 | 0.0063 | -21.6210 | -20.8431 | 14.2343 | 14.2369 |
| 0.6726 | 0.44 | 450 | 0.6902 | -0.0203 | -0.0301 | 0.5582 | 0.0099 | -21.6177 | -20.8327 | 14.2287 | 14.2313 |
| 0.6927 | 0.49 | 500 | 0.6903 | -0.0159 | -0.0254 | 0.5209 | 0.0096 | -21.6083 | -20.8239 | 14.2329 | 14.2355 |
| 0.6728 | 0.54 | 550 | 0.6905 | -0.0252 | -0.0342 | 0.5297 | 0.0089 | -21.6258 | -20.8426 | 14.2305 | 14.2331 |
| 0.6733 | 0.59 | 600 | 0.6877 | -0.0158 | -0.0305 | 0.5341 | 0.0147 | -21.6184 | -20.8237 | 14.2330 | 14.2356 |
| 0.6937 | 0.64 | 650 | 0.6916 | -0.0222 | -0.0293 | 0.5341 | 0.0071 | -21.6161 | -20.8365 | 14.2242 | 14.2268 |
| 0.6771 | 0.68 | 700 | 0.6921 | -0.0234 | -0.0294 | 0.5231 | 0.0060 | -21.6163 | -20.8391 | 14.2289 | 14.2315 |
| 0.6874 | 0.73 | 750 | 0.6916 | -0.0219 | -0.0286 | 0.5121 | 0.0067 | -21.6147 | -20.8361 | 14.2292 | 14.2317 |
| 0.6772 | 0.78 | 800 | 0.6888 | -0.0187 | -0.0313 | 0.5473 | 0.0127 | -21.6201 | -20.8295 | 14.2308 | 14.2334 |
| 0.7033 | 0.83 | 850 | 0.6886 | -0.0163 | -0.0294 | 0.5297 | 0.0131 | -21.6163 | -20.8248 | 14.2220 | 14.2245 |
| 0.6772 | 0.88 | 900 | 0.6894 | -0.0217 | -0.0330 | 0.5297 | 0.0113 | -21.6235 | -20.8357 | 14.2227 | 14.2253 |
| 0.696 | 0.93 | 950 | 0.6918 | -0.0229 | -0.0293 | 0.5275 | 0.0064 | -21.6160 | -20.8380 | 14.2213 | 14.2239 |
| 0.6881 | 0.98 | 1000 | 0.6919 | -0.0230 | -0.0291 | 0.5275 | 0.0061 | -21.6156 | -20.8382 | 14.2213 | 14.2239 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
stablediffusionapi/miasmav3 | stablediffusionapi | 2024-03-28T03:41:38Z | 19 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-28T03:38:37Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# miasma_v3 API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace the API key in the code below, and change **model_id** to "miasmav3"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/miasmav3)
Model link: [View model](https://modelslab.com/models/miasmav3)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "miasmav3",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
pskannan/Agaram-2.5-Mistral-7B | pskannan | 2024-03-28T03:41:34Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T03:36:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
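The card is still a template; based only on the repo tags (a conversational Mistral checkpoint), here is a hedged loading sketch — chat-template support is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pskannan/Agaram-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```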
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andytrann/dreambooth_car | andytrann | 2024-03-28T03:40:47Z | 2 | 0 | diffusers | [
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2024-03-28T02:51:43Z |
---
tags:
- autotrain
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of oue glc300
license: openrail++
---
# AutoTrain SDXL LoRA DreamBooth - andytrann/dreambooth_car
<Gallery />
## Model description
These are andytrann/dreambooth_car LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use `photo of oue glc300` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](andytrann/dreambooth_car/tree/main) them in the Files & versions tab.
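A hedged usage sketch with diffusers — the trigger phrase comes from the card above; everything else (precision, step count, prompt details) is a default assumption:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Load the DreamBooth LoRA weights from this repo
pipe.load_lora_weights("andytrann/dreambooth_car")

image = pipe("photo of oue glc300 parked on a mountain road", num_inference_steps=30).images[0]
image.save("glc300.png")
```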
|
N0de/Reinforce-Pixelcopter-PLE-v0 | N0de | 2024-03-28T03:33:23Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-28T03:33:17Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 55.70 +/- 44.21
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rararasputin/ppo-LunarLander-v2 | rararasputin | 2024-03-28T03:31:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-28T03:31:27Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.48 +/- 22.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the default Hub naming convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is an assumption based on the usual upload convention
checkpoint = load_from_hub("rararasputin/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Herry443/Mistral-7B-KNUT-ref-en-mmlu-0.5-final | Herry443 | 2024-03-28T03:30:59Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T03:08:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yuiseki/tinyllama-it-wikipedia-1.5T-v0.1 | yuiseki | 2024-03-28T03:26:43Z | 70 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T03:24:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wenge-research/yayi2-30b | wenge-research | 2024-03-28T03:24:09Z | 43 | 75 | transformers | [
"transformers",
"pytorch",
"yayi",
"text-generation",
"custom_code",
"arxiv:2312.14862",
"arxiv:2307.09288",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-12-12T08:40:29Z | ---
license: other
---
<div align="center">
<h1>
YAYI 2
</h1>
<!-- <br> -->
</div>
<div align="center">
<a href="https://github.com/wenge-research/YAYI2" target="_blank">GitHub</a> | <a href="https://yayi.wenge.com" target="_blank">雅意大模型</a>
</div>
## 介绍/Introduction
YAYI 2 是中科闻歌研发的开源大语言模型,包括 Base 和 Chat 版本,参数规模为 30B。YAYI2-30B 是基于 Transformer 的大语言模型,采用了 2.65 万亿 Tokens 的高质量、多语言语料进行预训练。针对通用和特定领域的应用场景,我们采用了百万级指令进行微调,同时借助人类反馈强化学习方法,以更好地使模型与人类价值观对齐。
本次开源的模型为 YAYI2-30B Base 模型。如果您想了解更多关于 YAYI 2 模型的细节,我们建议您参阅 [GitHub](https://github.com/wenge-research/YAYI2) 仓库。更多技术细节,欢迎阅读我们的技术报告🔥[YAYI 2: Multilingual Open-Source Large Language Models](https://arxiv.org/abs/2312.14862)。
YAYI 2 is a collection of open-source large language models launched by Wenge Technology. YAYI2-30B is a Transformer-based large language model, and has been pretrained for 2.65 trillion tokens of multilingual data with high quality. The base model is aligned with human values through supervised fine-tuning with millions of instructions and reinforcement learning from human feedback (RLHF).
We open-source the pre-trained language model in this release, namely **YAYI2-30B**. For more details about YAYI 2, please refer to our [GitHub](https://github.com/wenge-research/YAYI2) repository. For more technical details, please read our technical report 🔥[YAYI 2: Multilingual Open-Source Large Language Models](https://arxiv.org/abs/2312.14862).
## 模型细节/Model Details
| Hyperparameter| Value |
|:----------|:----------:|
| n_layers | 64 |
| n_heads | 64 |
| hidden_size | 7168 |
| vocab_size | 81920 |
| sequence length | 4096 |
## 要求/Requirements
* python 3.8及以上版本
* pytorch 2.0.1 及以上版本
* 建议使用 CUDA 11.7 及以上版本
* 运行 BF16 或 FP16 模型需要至少80GB显存(例如1xA100)
* python 3.8 and above
* pytorch 2.0.1 and above
* CUDA 11.7 and above are recommended
* To run YAYI2-30B in bf16/fp16, at least 80GB GPU memory is required (e.g., 1xA100-80GB)
## 快速开始/Quick Start
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("wenge-research/yayi2-30b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("wenge-research/yayi2-30b", device_map="auto", trust_remote_code=True)
>>> inputs = tokenizer('The winter in Beijing is', return_tensors='pt')
>>> inputs = inputs.to('cuda')
>>> pred = model.generate(
**inputs,
max_new_tokens=256,
eos_token_id=tokenizer.eos_token_id,
do_sample=True,
repetition_penalty=1.2,
temperature=0.4,
top_k=100,
top_p=0.8
)
>>> print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
## 评测结果/Evaluation
我们在多个基准数据集上进行了评测,包括 C-Eval、MMLU、 CMMLU、AGIEval、GAOKAO-Bench、GSM8K、MATH、BBH、HumanEval 以及 MBPP。我们考察了模型在语言理解、学科知识、数学推理、逻辑推理以及代码生成方面的表现。YAYI 2 模型在与其规模相近的开源模型中展现出了显著的性能提升。
We evaluate our model on standard benchmarks, including C-Eval, MMLU, CMMLU, AGIEval, GAOKAO-Bench, GSM8K, MATH, BBH, HumanEval, and MBPP. Our goal is to assess the model's performance in language comprehension, knowledge comprehension, mathematical reasoning, logical reasoning, and code generation. YAYI 2 has demonstrated exceptional performance across models with similar size.
<table id="myTable">
<!-- Table header -->
<tr>
<th></th>
<th colspan="5" style="text-align: center;">Knowledge</th>
<th colspan="2" style="text-align: center;">Math</th>
    <th colspan="1" style="text-align: center;">Logic reasoning</th>
<th colspan="2" style="text-align: center;">Code</th>
</tr>
<tr>
<th style="text-align: left;">Model</th>
<th>C-Eval(val)</th>
<th>MMLU</th>
<th>AGIEval</th>
<th>CMMLU</th>
<th>GAOKAO-Bench</th>
<th>GSM8K</th>
<th>MATH</th>
<th>BBH</th>
<th>HumanEval</th>
<th>MBPP</th>
</tr>
<tr>
<td></td>
<td style="text-align: center;">5-shot</td>
<td style="text-align: center;">5-shot</td>
<td style="text-align: center;">3/0-shot</td>
<td style="text-align: center;">5-shot</td>
<td style="text-align: center;">0-shot</td>
<td style="text-align: center;">8/4-shot</td>
<td style="text-align: center;">4-shot</td>
<td style="text-align: center;">3-shot</td>
<td style="text-align: center;">0-shot</td>
<td style="text-align: center;">3-shot</td>
</tr>
<tr>
<td><strong>MPT-30B</strong></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">46.9</td>
<td style="text-align: center;">33.8</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">15.2</td>
<td style="text-align: center;">3.1</td>
<td style="text-align: center;">38.0</td>
<td style="text-align: center;">25.0</td>
<td style="text-align: center;">32.8</td>
</tr>
<tr>
<td><strong>Falcon-40B</strong></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">55.4</td>
<td style="text-align: center;">37.0</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">19.6</td>
<td style="text-align: center;">5.5</td>
<td style="text-align: center;">37.1</td>
<td style="text-align: center;">0.6</td>
<td style="text-align: center;">29.8</td>
</tr>
<tr>
<td><strong>LLaMA2-34B</strong></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">62.6</td>
<td style="text-align: center;">43.4</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">42.2</td>
<td style="text-align: center;">6.2</td>
<td style="text-align: center;">44.1</td>
<td style="text-align: center;">22.6</td>
<td style="text-align: center;">33.0</td>
</tr>
<tr>
<td><strong>Baichuan2-13B</strong></td>
<td style="text-align: center;">59.0</td>
<td style="text-align: center;">59.5</td>
<td style="text-align: center;">37.4</td>
<td style="text-align: center;">61.3</td>
<td style="text-align: center;">45.6</td>
<td style="text-align: center;">52.6</td>
<td style="text-align: center;">10.1</td>
<td style="text-align: center;">49.0</td>
<td style="text-align: center;">17.1</td>
<td style="text-align: center;">30.8</td>
</tr>
<tr>
<td><strong>Qwen-14B</strong></td>
<td style="text-align: center;">71.7</td>
<td style="text-align: center;">67.9</td>
<td style="text-align: center;">51.9</td>
<td style="text-align: center;">70.2</td>
<td style="text-align: center;">62.5</td>
<td style="text-align: center;">61.6</td>
<td style="text-align: center;">25.2</td>
<td style="text-align: center;">53.7</td>
<td style="text-align: center;">32.3</td>
<td style="text-align: center;">39.8</td>
</tr>
<tr>
<td><strong>InternLM-20B</strong></td>
<td style="text-align: center;">58.8</td>
<td style="text-align: center;">62.1</td>
<td style="text-align: center;">44.6</td>
<td style="text-align: center;">59.0</td>
<td style="text-align: center;">45.5</td>
<td style="text-align: center;">52.6</td>
<td style="text-align: center;">7.9</td>
<td style="text-align: center;">52.5</td>
<td style="text-align: center;">25.6</td>
<td style="text-align: center;">35.6</td>
</tr>
<tr>
<td><strong>Aquila2-34B</strong></td>
<td style="text-align: center;">98.5</td>
<td style="text-align: center;">76.0</td>
<td style="text-align: center;">43.8</td>
<td style="text-align: center;">78.5</td>
<td style="text-align: center;">37.8</td>
<td style="text-align: center;">50.0</td>
<td style="text-align: center;">17.8</td>
<td style="text-align: center;">42.5</td>
<td style="text-align: center;">0.0</td>
<td style="text-align: center;">41.0</td>
</tr>
<tr>
<td><strong>Yi-34B</strong></td>
<td style="text-align: center;">81.8</td>
<td style="text-align: center;">76.3</td>
<td style="text-align: center;">56.5</td>
<td style="text-align: center;">82.6</td>
<td style="text-align: center;">68.3</td>
<td style="text-align: center;">67.6</td>
<td style="text-align: center;">15.9</td>
<td style="text-align: center;">66.4</td>
<td style="text-align: center;">26.2</td>
<td style="text-align: center;">38.2</td>
</tr>
<tr>
<td><strong>YAYI2-30B</strong></td>
<td style="text-align: center;">80.9</td>
<td style="text-align: center;"><b>80.5</b></td>
<td style="text-align: center;"><b>62.0</b></td>
<td style="text-align: center;"><b>84.0</b></td>
<td style="text-align: center;">64.4</td>
<td style="text-align: center;"><b>71.2</b></td>
<td style="text-align: center;">14.8</td>
<td style="text-align: center;">54.5</td>
<td style="text-align: center;"><b>53.1</b></td>
<td style="text-align: center;"><b>45.8</b></td>
</tr>
</table>
我们使用 [OpenCompass Github 仓库](https://github.com/open-compass/opencompass) 提供的源代码进行了评测。对于对比模型,我们列出了他们在 [OpenCompass](https://opencompass.org.cn) 榜单上的评测结果,截止日期为 2023年12月15日。对于其他尚未在 [OpenCompass](https://opencompass.org.cn/leaderboard-llm) 平台参与评测的模型,包括 MPT、Falcon 和 LLaMa 2,我们采用了 [LLaMA 2](https://arxiv.org/abs/2307.09288) 报告的结果。
We evaluate our model using the source code from the [OpenCompass GitHub repository](https://github.com/open-compass/opencompass). Where available, we report results for comparative models assessed by OpenCompass, with the evaluation reference date set to Dec. 15th, 2023. For MPT, Falcon, and LLaMA 2, which have not been evaluated by OpenCompass, we use the results reported in the [LLaMA 2](https://arxiv.org/abs/2307.09288) paper.
## 协议/License
本项目中的代码依照 [Apache-2.0](https://github.com/wenge-research/YAYI2/blob/main/LICENSE) 协议开源,社区使用 YAYI 2 模型和数据需要遵循[雅意 YAYI 2 模型社区许可协议](https://github.com/wenge-research/YAYI2/blob/main/COMMUNITY_LICENSE)。若您需要将雅意 YAYI 2 系列模型或其衍生品用作商业用途,请完整填写[《雅意 YAYI 2 模型商用登记信息》](https://github.com/wenge-research/YAYI2/blob/main/REGISTRATION_INFORMATION),并发送至 [email protected],收到邮件后我们将在3个工作日进行审核,通过审核后您将收到商用许可证,请您在使用过程中严格遵守[《雅意 YAYI 2 模型商用许可协议》](https://github.com/wenge-research/YAYI2/blob/main/COMMERCIAL_LICENSE)的相关内容,感谢您的配合!
The code in this project is open-sourced under the [Apache-2.0](https://github.com/wenge-research/YAYI2/blob/main/LICENSE) license. The use of YaYi series model weights and data must adhere to the [YAYI 2 Community License](https://github.com/wenge-research/YAYI2/blob/main/COMMUNITY_LICENSE). If you intend to use the YAYI 2 series models or their derivatives for commercial purposes, please complete the [YAYI 2 Model Commercial Registration Information](https://github.com/wenge-research/YAYI2/blob/main/REGISTRATION_INFORMATION_EN) and send it to [email protected]. After receiving the email, we will conduct an audit within 3 working days. Once the audit is passed, you will receive a commercial license. Please strictly comply with the relevant content of the [YAYI 2 Model Commercial License Agreement](https://github.com/wenge-research/YAYI2/blob/main/COMMERCIAL_LICENSE) during the use process. Thank you for your cooperation!
## 引用/Citation
如果您在工作中使用了我们的模型,请引用我们的论文。
If you are using the resource for your work, please cite our paper.
```
@article{YAYI 2,
  author = {Yin Luo, Qingchao Kong, Nan Xu, et al.},
title = {YAYI 2: Multilingual Open Source Large Language Models},
journal = {arXiv preprint arXiv:2312.14862},
url = {https://arxiv.org/abs/2312.14862},
year = {2023}
}
``` |
Mohamed6900/Merged-Model | Mohamed6900 | 2024-03-28T03:23:06Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"gpt2",
"distilgpt2",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-28T03:23:06Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- gpt2
- distilgpt2
---
# Merged-Model
Merged-Model is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [gpt2](https://huggingface.co/gpt2)
* [distilgpt2](https://huggingface.co/distilgpt2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: gpt2
layer_range: [0, 6]
- model: distilgpt2
layer_range: [0, 6]
merge_method: slerp
base_model: gpt2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
wookidoki/autofix10k | wookidoki | 2024-03-28T03:16:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
]
| null | 2024-03-28T03:16:49Z | ---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: autofix10k
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# autofix10k
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- _load_in_8bit: True
- _load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- bnb_4bit_quant_storage: uint8
- load_in_4bit: False
- load_in_8bit: True
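For reference, a hedged sketch of how these settings map onto a load-time `BitsAndBytesConfig`; this is not taken from the author's training script:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 8-bit settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```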
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7922 | 0.2 | 20 | 0.5237 |
| 0.5053 | 0.4 | 40 | 0.4857 |
| 0.4071 | 0.6 | 60 | 0.4356 |
| 0.4297 | 0.8 | 80 | 0.4154 |
| 0.5313 | 1.0 | 100 | 0.3827 |
| 0.4814 | 1.2 | 120 | 0.3785 |
| 0.3739 | 1.4 | 140 | 0.3774 |
| 0.3279 | 1.6 | 160 | 0.3761 |
| 0.3149 | 1.8 | 180 | 0.3732 |
| 0.4086 | 2.0 | 200 | 0.3658 |
| 0.3724 | 2.2 | 220 | 0.3664 |
| 0.3691 | 2.4 | 240 | 0.3644 |
| 0.3065 | 2.6 | 260 | 0.3679 |
| 0.2688 | 2.8 | 280 | 0.3767 |
| 0.3431 | 3.0 | 300 | 0.3633 |
| 0.333 | 3.2 | 320 | 0.3641 |
| 0.3052 | 3.4 | 340 | 0.3597 |
| 0.2444 | 3.6 | 360 | 0.3779 |
| 0.2455 | 3.8 | 380 | 0.3712 |
| 0.3078 | 4.0 | 400 | 0.3578 |
| 0.2877 | 4.2 | 420 | 0.3650 |
| 0.2659 | 4.4 | 440 | 0.3731 |
| 0.2496 | 4.6 | 460 | 0.3764 |
| 0.218 | 4.8 | 480 | 0.3781 |
| 0.219 | 5.0 | 500 | 0.3742 |
| 0.2119 | 5.2 | 520 | 0.3808 |
| 0.2435 | 5.4 | 540 | 0.3871 |
| 0.2331 | 5.6 | 560 | 0.3818 |
| 0.1738 | 5.8 | 580 | 0.3758 |
| 0.1772 | 6.0 | 600 | 0.3731 |
| 0.1607 | 6.2 | 620 | 0.4121 |
| 0.1942 | 6.4 | 640 | 0.3943 |
| 0.2312 | 6.6 | 660 | 0.3867 |
| 0.1528 | 6.8 | 680 | 0.4160 |
| 0.1155 | 7.0 | 700 | 0.4100 |
| 0.1495 | 7.2 | 720 | 0.4081 |
| 0.1674 | 7.4 | 740 | 0.4015 |
| 0.1849 | 7.6 | 760 | 0.4075 |
| 0.1231 | 7.8 | 780 | 0.4238 |
| 0.0905 | 8.0 | 800 | 0.4128 |
| 0.1156 | 8.2 | 820 | 0.4278 |
| 0.1628 | 8.4 | 840 | 0.4203 |
| 0.1545 | 8.6 | 860 | 0.4219 |
| 0.1236 | 8.8 | 880 | 0.4294 |
| 0.0799 | 9.0 | 900 | 0.4224 |
| 0.0991 | 9.2 | 920 | 0.4399 |
| 0.1176 | 9.4 | 940 | 0.4350 |
| 0.1711 | 9.6 | 960 | 0.4362 |
| 0.1106 | 9.8 | 980 | 0.4414 |
| 0.0582 | 10.0 | 1000 | 0.4372 |
### Framework versions
- PEFT 0.4.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Mohamed6900/Mistral_zephyrModel | Mohamed6900 | 2024-03-28T03:15:15Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"EleutherAI/gpt-neo-2.7B",
"EleutherAI/gpt-j-6B",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-28T03:15:14Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- EleutherAI/gpt-neo-2.7B
- EleutherAI/gpt-j-6B
---
# Mistral_zephyrModel
Mistral_zephyrModel is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B)
* [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: EleutherAI/gpt-neo-2.7B
layer_range: [0, 32]
- model: EleutherAI/gpt-j-6B
layer_range: [0, 28]
merge_method: slerp
base_model: EleutherAI/gpt-neo-2.7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
msneubauer/Reinforce-01 | msneubauer | 2024-03-28T03:10:26Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-28T03:10:18Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
braunagn/joeyGPT-reward-merged-v1 | braunagn | 2024-03-28T03:08:54Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-classification",
"trl",
"reward-trainer",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-classification | 2024-03-07T16:54:08Z | ---
library_name: transformers
tags:
- trl
- reward-trainer
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
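As a hedged sketch only (the card does not yet document usage): the repository tags (`mistral`, `text-classification`, `trl`, `reward-trainer`) suggest the model exposes a sequence-classification reward head, so scoring a candidate response could look roughly like the following. Everything here is an assumption, not an official example for this model.
```python
# Sketch under stated assumptions; reward heads trained with trl usually emit one scalar logit.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "braunagn/joeyGPT-reward-merged-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

text = "User: tell me a joke\nAssistant: Why did the chicken cross the road? ..."
inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    score = model(**inputs).logits[0, 0].item()  # single scalar reward score
print(score)
```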
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/Llama-2-7b-chat-hf-gguf | RichardErkhov | 2024-03-28T03:04:01Z | 25 | 1 | null | [
"gguf",
"arxiv:2307.09288",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-03-27T15:10:10Z | GGUF quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Linkedin](https://www.linkedin.com/in/richard-erkhov/)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-chat-hf - GGUF
- Model creator: https://huggingface.co/meta-llama/
- Original model: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-7b-chat-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q2_K.gguf) | Q2_K | 2.36GB |
| [Llama-2-7b-chat-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Llama-2-7b-chat-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Llama-2-7b-chat-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Llama-2-7b-chat-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Llama-2-7b-chat-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q3_K.gguf) | Q3_K | 3.07GB |
| [Llama-2-7b-chat-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Llama-2-7b-chat-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Llama-2-7b-chat-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Llama-2-7b-chat-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Llama-2-7b-chat-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Llama-2-7b-chat-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Llama-2-7b-chat-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q4_K.gguf) | Q4_K | 3.8GB |
| [Llama-2-7b-chat-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Llama-2-7b-chat-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Llama-2-7b-chat-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Llama-2-7b-chat-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Llama-2-7b-chat-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q5_K.gguf) | Q5_K | 4.45GB |
| [Llama-2-7b-chat-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Llama-2-7b-chat-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Llama-2-7b-chat-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q6_K.gguf) | Q6_K | 5.15GB |
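As a rough, hedged sketch (not part of the quantizer's notes), any of the files above can be fetched with `huggingface_hub` and run locally with `llama-cpp-python`; the quant choice and generation settings below are illustrative only.
```python
# Illustrative only: pick whichever quant from the table fits your hardware.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/Llama-2-7b-chat-hf-gguf",
    filename="Llama-2-7b-chat-hf.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("[INST] Explain GGUF quantization in one sentence. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```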
Original model description:
---
extra_gated_heading: You need to share contact information with Meta to access this model
extra_gated_prompt: >-
### LLAMA 2 COMMUNITY LICENSE AGREEMENT
"Agreement" means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation
accompanying Llama 2 distributed by Meta at
https://ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Licensee" or "you" means you, or your employer or any other person or entity
(if you are entering into this Agreement on such person or entity's behalf),
of the age required under applicable laws, rules or regulations to provide
legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Llama 2" means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and
other elements of the foregoing distributed by Meta at
ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Llama Materials" means, collectively, Meta's proprietary Llama 2 and
documentation (and any portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or
Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
or Switzerland).
By clicking "I Accept" below or by using or distributing any portion or
element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
transferable and royalty-free limited license under Meta's intellectual
property or other rights owned by Meta embodied in the Llama Materials to
use, reproduce, distribute, copy, create derivative works of, and make
modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works
thereof, available to a third party, you shall provide a copy of this
Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a
Licensee as part of an integrated end user product, then Section 2 of this
Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute
the following attribution notice within a "Notice" text file distributed as a
part of such copies: "Llama 2 is licensed under the LLAMA 2 Community
License, Copyright (c) Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws and
regulations (including trade compliance laws and regulations) and adhere to
the Acceptable Use Policy for the Llama Materials (available at
https://ai.meta.com/llama/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama
Materials to improve any other large language model (excluding Llama 2 or
derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the
monthly active users of the products or services made available by or for
Licensee, or Licensee's affiliates, is greater than 700 million monthly
active users in the preceding calendar month, you must request a license from
Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to exercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA
MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS"
BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING,
WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY
RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE
LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE,
PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST
PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR
PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE
POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, neither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, except as
required for reasonable and customary use in describing and redistributing
the Llama Materials.
b. Subject to Meta's ownership of Llama Materials and derivatives made by or
for Meta, with respect to any derivative works and modifications of the Llama
Materials that are made by you, as between you and Meta, you are and will be
the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or counterclaim in a lawsuit) alleging that
the Llama Materials or Llama 2 outputs or results, or any portion of any of
the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under
this Agreement shall terminate as of the date such litigation or claim is
filed or instituted. You will indemnify and hold harmless Meta from and
against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your
acceptance of this Agreement or access to the Llama Materials and will
continue in full force and effect until terminated in accordance with the
terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this
Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and
construed under the laws of the State of California without regard to choice
of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California
shall have exclusive jurisdiction of any dispute arising out of this
Agreement.
USE POLICY
### Llama 2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features,
including Llama 2. If you access or use Llama 2, you agree to this Acceptable
Use Policy (“Policy”). The most recent copy of this policy can be found at
[ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).
#### Prohibited Uses
We want everyone to use Llama 2 safely and responsibly. You agree you will not
use, or allow others to use, Llama 2 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or
development of activities that present a risk of death or bodily harm to
individuals, including use of Llama 2 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 2 related
to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Llama 2 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems
that could lead to a violation of this Policy through one of the following
means:
* Reporting issues with the model:
[github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
* Reporting risky content generated by the model:
[developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns:
[facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of
Llama: [[email protected]](mailto:[email protected])
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- en
pipeline_tag: text-generation
arxiv: 2307.09288
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
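For illustration only, a hedged sketch of assembling a single-turn prompt in that layout; the reference `chat_completion` code linked above remains the authoritative implementation, especially for multi-turn conversations.
```python
# Single-turn Llama-2 chat layout; BOS/EOS tokens are added by the tokenizer.
system_prompt = "You are a helpful, respectful and honest assistant."
user_message = "What is grouped-query attention?"

prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
# e.g. inputs = tokenizer(prompt, return_tensors="pt")  # add_special_tokens=True by default
```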
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
|
tsavage68/mpt_1000_STEPS_1e6_rate_05_beta_DPO | tsavage68 | 2024-03-28T02:57:35Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"custom_code",
"base_model:mosaicml/mpt-7b-instruct",
"base_model:finetune:mosaicml/mpt-7b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T02:41:29Z | ---
license: apache-2.0
base_model: mosaicml/mpt-7b-instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mpt_1000_STEPS_1e6_rate_05_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mpt_1000_STEPS_1e6_rate_05_beta_DPO
This model is a fine-tuned version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6941
- Rewards/chosen: -1.2875
- Rewards/rejected: -1.6132
- Rewards/accuracies: 0.6154
- Rewards/margins: 0.3257
- Logps/rejected: -24.7839
- Logps/chosen: -23.3672
- Logits/rejected: 14.1648
- Logits/chosen: 14.1681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
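For orientation only, the configuration above maps onto `trl`'s `DPOTrainer` roughly as sketched below. This is not the released training script: the preference data is a toy placeholder, and the beta value is inferred from the repository name.
```python
# Hedged sketch of a DPO run using the hyperparameters listed above.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mosaicml/mpt-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)
ref_model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)

# Toy preference data; a real run needs many prompt/chosen/rejected pairs.
train_dataset = Dataset.from_dict({
    "prompt":   ["Summarize: The cat sat on the mat."],
    "chosen":   ["A cat sat on a mat."],
    "rejected": ["Dogs make wonderful pets."],
})

args = TrainingArguments(
    output_dir="mpt_dpo",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=1e-6,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    remove_unused_columns=False,
)
trainer = DPOTrainer(
    model,
    ref_model,
    args=args,
    beta=0.5,  # assumption based on "05_beta" in the repository name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```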
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.7473 | 0.1 | 100 | 0.6927 | 0.1811 | 0.1159 | 0.5582 | 0.0651 | -21.3256 | -20.4301 | 14.3166 | 14.3195 |
| 0.7098 | 0.2 | 200 | 0.7624 | 0.6571 | 0.5345 | 0.5714 | 0.1226 | -20.4884 | -19.4780 | 14.1537 | 14.1566 |
| 0.7516 | 0.29 | 300 | 0.7505 | -0.8487 | -1.0927 | 0.5429 | 0.2440 | -23.7428 | -22.4895 | 14.5590 | 14.5620 |
| 0.7762 | 0.39 | 400 | 0.7476 | -2.2343 | -2.4798 | 0.5692 | 0.2455 | -26.5171 | -25.2608 | 14.0064 | 14.0094 |
| 0.8328 | 0.49 | 500 | 0.7228 | -1.5283 | -1.7877 | 0.5736 | 0.2594 | -25.1329 | -23.8488 | 14.1811 | 14.1843 |
| 0.625 | 0.59 | 600 | 0.7006 | -1.3183 | -1.6353 | 0.5978 | 0.3170 | -24.8281 | -23.4288 | 14.3453 | 14.3486 |
| 0.7164 | 0.68 | 700 | 0.7015 | -1.2944 | -1.6029 | 0.6022 | 0.3084 | -24.7632 | -23.3811 | 14.2239 | 14.2271 |
| 0.6844 | 0.78 | 800 | 0.6985 | -1.2758 | -1.5914 | 0.6198 | 0.3157 | -24.7403 | -23.3437 | 14.1630 | 14.1663 |
| 0.6996 | 0.88 | 900 | 0.6971 | -1.2896 | -1.6092 | 0.6110 | 0.3196 | -24.7758 | -23.3713 | 14.1673 | 14.1706 |
| 0.6352 | 0.98 | 1000 | 0.6941 | -1.2875 | -1.6132 | 0.6154 | 0.3257 | -24.7839 | -23.3672 | 14.1648 | 14.1681 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
braunagn/joeyGPT-reward-Lora-v1 | braunagn | 2024-03-28T02:56:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-07T16:52:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ntvcie/Gemma7bVinhntV4 | ntvcie | 2024-03-28T02:47:03Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T02:47:03Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** ntvcie
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ylv02/navi-bert-base-uncase | ylv02 | 2024-03-28T02:46:00Z | 48 | 0 | transformers | [
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-27T11:18:55Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: navi-bert-base-uncase
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# navi-bert-base-uncase
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| nan | nan | 0 |
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tsavage68/mpt_1000_STEPS_1e5_rate_05_beta_DPO | tsavage68 | 2024-03-28T02:45:57Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"custom_code",
"base_model:mosaicml/mpt-7b-instruct",
"base_model:finetune:mosaicml/mpt-7b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T02:40:58Z | ---
license: apache-2.0
base_model: mosaicml/mpt-7b-instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mpt_1000_STEPS_1e5_rate_05_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mpt_1000_STEPS_1e5_rate_05_beta_DPO
This model is a fine-tuned version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1807
- Rewards/chosen: -19.4532
- Rewards/rejected: -19.2274
- Rewards/accuracies: 0.5033
- Rewards/margins: -0.2258
- Logps/rejected: -60.0122
- Logps/chosen: -59.6986
- Logits/rejected: 7.5623
- Logits/chosen: 7.5620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 1.5203 | 0.05 | 50 | 1.5171 | -1.5689 | -1.4986 | 0.4791 | -0.0703 | -24.5546 | -23.9299 | 14.9602 | 14.9630 |
| 4.4339 | 0.1 | 100 | 2.9117 | -11.0118 | -10.8837 | 0.4813 | -0.1281 | -43.3247 | -42.8158 | 22.8545 | 22.8566 |
| 5.6756 | 0.15 | 150 | 4.3519 | -20.9772 | -20.5347 | 0.4703 | -0.4424 | -62.6269 | -62.7465 | 13.8454 | 13.8456 |
| 3.4587 | 0.2 | 200 | 3.7953 | -20.5135 | -19.9733 | 0.4549 | -0.5402 | -61.5040 | -61.8193 | 9.3162 | 9.3161 |
| 3.1326 | 0.24 | 250 | 4.2192 | -16.2805 | -16.0169 | 0.4857 | -0.2636 | -53.5912 | -53.3533 | 17.4741 | 17.4741 |
| 4.3129 | 0.29 | 300 | 3.2442 | -18.6648 | -18.0875 | 0.4462 | -0.5773 | -57.7325 | -58.1219 | 9.3299 | 9.3300 |
| 4.1056 | 0.34 | 350 | 3.0391 | -19.9243 | -19.4698 | 0.4659 | -0.4545 | -60.4970 | -60.6408 | 13.8852 | 13.8856 |
| 3.4604 | 0.39 | 400 | 3.0915 | -16.3912 | -16.0366 | 0.5055 | -0.3546 | -53.6306 | -53.5745 | 9.7129 | 9.7125 |
| 4.7084 | 0.44 | 450 | 2.7841 | -18.9738 | -18.6116 | 0.4835 | -0.3622 | -58.7806 | -58.7398 | 9.9158 | 9.9143 |
| 4.1944 | 0.49 | 500 | 2.9877 | -22.1479 | -21.8535 | 0.4901 | -0.2944 | -65.2644 | -65.0879 | 10.6479 | 10.6476 |
| 3.8283 | 0.54 | 550 | 2.4650 | -19.8299 | -19.7039 | 0.4989 | -0.1260 | -60.9653 | -60.4520 | 5.6892 | 5.6889 |
| 3.2208 | 0.59 | 600 | 2.3549 | -15.6227 | -15.7624 | 0.5385 | 0.1397 | -53.0822 | -52.0377 | 11.5783 | 11.5782 |
| 2.1741 | 0.64 | 650 | 2.4777 | -19.7204 | -19.3976 | 0.4945 | -0.3228 | -60.3526 | -60.2330 | 10.8601 | 10.8596 |
| 2.8376 | 0.68 | 700 | 2.4241 | -18.3119 | -18.1735 | 0.5055 | -0.1384 | -57.9045 | -57.4161 | 8.0859 | 8.0854 |
| 2.4514 | 0.73 | 750 | 2.2743 | -20.2330 | -20.0266 | 0.5033 | -0.2064 | -61.6106 | -61.2582 | 6.6227 | 6.6223 |
| 1.8899 | 0.78 | 800 | 2.2326 | -19.6323 | -19.3966 | 0.5121 | -0.2358 | -60.3506 | -60.0568 | 7.6793 | 7.6789 |
| 2.435 | 0.83 | 850 | 2.1976 | -19.5253 | -19.2881 | 0.5121 | -0.2372 | -60.1336 | -59.8427 | 7.3698 | 7.3695 |
| 2.7112 | 0.88 | 900 | 2.1806 | -19.4443 | -19.2182 | 0.5011 | -0.2261 | -59.9939 | -59.6808 | 7.5579 | 7.5575 |
| 2.6506 | 0.93 | 950 | 2.1819 | -19.4556 | -19.2275 | 0.5011 | -0.2280 | -60.0125 | -59.7034 | 7.5627 | 7.5623 |
| 1.5392 | 0.98 | 1000 | 2.1807 | -19.4532 | -19.2274 | 0.5033 | -0.2258 | -60.0122 | -59.6986 | 7.5623 | 7.5620 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tung491/dqn-SpaceInvadersNoFrameskip-v4 | tung491 | 2024-03-28T02:42:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-28T02:41:33Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 382.00 +/- 149.15
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tung491 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tung491 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tung491
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
dranger003/dolphincoder-starcoder2-15b-iMat.GGUF | dranger003 | 2024-03-28T02:19:12Z | 32 | 4 | gguf | [
"gguf",
"text-generation",
"base_model:cognitivecomputations/dolphincoder-starcoder2-15b",
"base_model:quantized:cognitivecomputations/dolphincoder-starcoder2-15b",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-03-07T03:04:46Z | ---
license: bigcode-openrail-m
pipeline_tag: text-generation
library_name: gguf
base_model: cognitivecomputations/dolphincoder-starcoder2-15b
---
<u>**NOTE**</u>: You will need a recent build of llama.cpp to run these quants (i.e. at least commit `494c870`).
GGUF importance matrix (imatrix) quants for https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-15b
* The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
* The [imatrix is being used on the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930) as well (under Q6_K).
> This model is based on StarCoder2-15b and is subject to bigcode-openrail-m license.<br>This Dolphin is really good at coding, I trained with a lot of coding data.<br>This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.
| Layers | Context | [Template](https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-15b#training) |
| --- | --- | --- |
| <pre>40</pre> | <pre>16384</pre> | <pre>\<\|im_start\|\>system<br>You are DolphinCoder, a helpful AI programming assistant.\<\|im_end\|\><br>\<\|im_start\|\>user<br>{prompt}\<\|im_end\|\><br>\<\|im_start\|\>assistant<br> </pre> |
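A minimal, hedged example of running one of these quants with `llama-cpp-python`, using the ChatML template from the table above; the file name is a placeholder and the sampling settings are assumptions.
```python
# Sketch only: substitute a real .gguf file name from this repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="dranger003/dolphincoder-starcoder2-15b-iMat.GGUF",
    filename="<a .gguf file from this repo>",  # placeholder, not a real file name
)
llm = Llama(model_path=path, n_ctx=16384)

prompt = (
    "<|im_start|>system\nYou are DolphinCoder, a helpful AI programming assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```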
|
Smuggling1710/An4-7Bv2.1 | Smuggling1710 | 2024-03-28T02:13:16Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T02:08:53Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** Smuggling1710
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MagmaCode/dqn-SpaceInvadersNoFrameskip-v4-v2 | MagmaCode | 2024-03-28T01:56:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-28T01:56:20Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 136.00 +/- 158.14
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MagmaCode -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MagmaCode -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MagmaCode
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 10000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Parsavares/LuxembourgishSTT | Parsavares | 2024-03-28T01:52:12Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"lb",
"license:mit",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-03-26T16:18:01Z | ---
tags:
- automatic-speech-recognition
- generated_from_trainer
license: mit
language:
- lb
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
# parsavares/wav2vec2-base-luxembourgish-STT: A Luxembourgish ASR Model
## Overview
This model utilizes the wav2vec 2.0 architecture, initially pre-trained on 842 hours of unlabeled Luxembourgish speech data from [RTL.lu](https://www.rtl.lu/), followed by fine-tuning on 4 hours of labeled speech from the same domain. Designed to improve automatic speech recognition (ASR) for Luxembourgish, this effort aims to bridge the digital resource gap for the Luxembourgish language, making it more accessible for speech-based applications.
## Model Description
Chosen for its robust performance on speech data, especially where labeled examples are scarce, the wav2vec 2.0 base model was first pre-trained on a large corpus of Luxembourgish speech. It was then fine-tuned with a smaller, annotated dataset specifically for speech recognition tasks. This approach was intended to refine the model's capability to accurately transcribe Luxembourgish speech.
### Performance Metrics
| Metric | Dev Set | Test Set |
|--------|---------|----------|
| WER | 23.95% | 23.09% |
| CER | 7.97% | 7.63% |
### Intended Uses & Limitations
Targeted at researchers, developers, and companies interested in integrating Luxembourgish speech recognition into their services, the model marks a significant advance in Luxembourgish ASR technology. However, its efficacy may vary with the accent, specific jargon, and ambient noise in the audio input.
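A hedged usage sketch with the 🤗 `pipeline` API (not from the original card); the audio path is a placeholder and the input is assumed to be a 16 kHz mono recording.
```python
# Minimal sketch: transcribe a local audio file with the ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Parsavares/LuxembourgishSTT",
)
result = asr("example_luxembourgish.wav")  # placeholder path
print(result["text"])
```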
### Training and Evaluation Data
The pre-training and fine-tuning datasets are not yet documented in detail here; adding those details would make the results easier to interpret and reproduce.
## Training Procedure
### Hyperparameters
| Hyperparameter | Value |
|------------------------------|----------------|
| Learning rate | 7.5e-05 |
| Batch size (train/eval) | 3 |
| Seed | 42 |
| Gradient accumulation steps | 4 |
| Total train batch size | 12 |
| Optimizer | Adam (betas=(0.9,0.999), epsilon=1e-08) |
| LR scheduler | Linear, with 2000 warmup steps |
| Epochs | 50 |
| Mixed precision training | Native AMP |
### Software and Libraries
| Software/Library | Version |
|------------------|--------------|
| Transformers | 4.20.0.dev0 |
| PyTorch | 1.11.0+cu113 |
| Datasets | 2.2.1 |
| Tokenizers | 0.12.1 |
## Visualization
(Graph of training loss over epochs and comparison of WER and CER on Dev vs. Test datasets to be added here)
## Citation
Please cite the following if you use this model in your work:
```
@misc{lb-wav2vec2,
author = {Nguyen, Le Minh and Nayak, Shekhar and Coler, Matt.},
keywords = {Luxembourgish, multilingual speech recognition, language modelling, wav2vec 2.0 XLSR-53, under-resourced language},
title = {IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS},
year = {2022},
copyright = {2023 IEEE}
}
```
|
JinbiaoZhu/finetuned-DebertaV3-imdb-TextClassification | JinbiaoZhu | 2024-03-28T01:41:17Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-28T00:57:34Z | ---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-DebertaV3-imdb-TextClassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-DebertaV3-imdb-TextClassification
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1329
- Accuracy: 0.9631
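For quick sentiment inference with this checkpoint, a minimal sketch using the 🤗 `pipeline` API might look like this (the returned label names depend on the checkpoint's config and are not documented here):
```python
from transformers import pipeline

# Load the fine-tuned DeBERTa-v3 checkpoint for IMDB-style sentiment classification.
classifier = pipeline(
    "text-classification",
    model="JinbiaoZhu/finetuned-DebertaV3-imdb-TextClassification",
)

print(classifier("A beautifully shot film, but the script never quite lands."))
```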
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1543 | 1.0 | 1042 | 0.1108 | 0.9592 |
| 0.0929 | 2.0 | 2084 | 0.1329 | 0.9631 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tsavage68/mpt_1000_STEPS_1e6_rate_03_beta_DPO | tsavage68 | 2024-03-28T01:38:52Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"custom_code",
"base_model:mosaicml/mpt-7b-instruct",
"base_model:finetune:mosaicml/mpt-7b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T04:21:55Z | ---
license: apache-2.0
base_model: mosaicml/mpt-7b-instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: v1_1000_STEPS_1e6_rate_03_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v1_1000_STEPS_1e6_rate_03_beta_DPO
This model is a fine-tuned version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6641
- Rewards/chosen: -1.4066
- Rewards/rejected: -1.6576
- Rewards/accuracies: 0.6198
- Rewards/margins: 0.2510
- Logps/rejected: -27.0829
- Logps/chosen: -25.4808
- Logits/rejected: 13.3887
- Logits/chosen: 13.3921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
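These settings correspond roughly to the following `trl` sketch (dataset loading, sequence handling, and the exact `trl` argument names are assumptions; the API surface has shifted across `trl` releases):
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mosaicml/mpt-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # MPT's tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, trust_remote_code=True)
ref_model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, trust_remote_code=True)

# Hypothetical preference dataset with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("my-org/my-preference-data", split="train")

training_args = TrainingArguments(
    output_dir="mpt_dpo",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=1e-6,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
    bf16=True,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    beta=0.3,  # the 0.3 beta referenced in the model name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```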
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6901 | 0.05 | 50 | 0.6931 | 0.0510 | 0.0490 | 0.5253 | 0.0019 | -21.3940 | -20.6223 | 14.3181 | 14.3207 |
| 0.7257 | 0.1 | 100 | 0.6841 | 0.0934 | 0.0501 | 0.5692 | 0.0433 | -21.3906 | -20.4809 | 14.1613 | 14.1641 |
| 0.7259 | 0.15 | 150 | 0.6925 | -0.0147 | -0.0834 | 0.5451 | 0.0688 | -21.8355 | -20.8411 | 13.9200 | 13.9229 |
| 0.6593 | 0.2 | 200 | 0.7118 | 0.4903 | 0.3962 | 0.5802 | 0.0941 | -20.2368 | -19.1579 | 13.7791 | 13.7821 |
| 0.7282 | 0.24 | 250 | 0.7093 | -1.2326 | -1.3686 | 0.5648 | 0.1360 | -26.1195 | -24.9010 | 13.8037 | 13.8067 |
| 0.6924 | 0.29 | 300 | 0.6944 | -0.7898 | -0.9655 | 0.5626 | 0.1757 | -24.7758 | -23.4250 | 14.0496 | 14.0528 |
| 0.7523 | 0.34 | 350 | 0.6909 | -0.9371 | -1.1226 | 0.5626 | 0.1855 | -25.2994 | -23.9158 | 14.0003 | 14.0037 |
| 0.7276 | 0.39 | 400 | 0.6918 | -1.8471 | -2.0415 | 0.5868 | 0.1944 | -28.3625 | -26.9492 | 13.3382 | 13.3414 |
| 0.6255 | 0.44 | 450 | 0.6860 | -1.5470 | -1.7599 | 0.5934 | 0.2129 | -27.4236 | -25.9489 | 13.2551 | 13.2584 |
| 0.7342 | 0.49 | 500 | 0.6801 | -1.5841 | -1.7888 | 0.5758 | 0.2046 | -27.5199 | -26.0726 | 13.4186 | 13.4219 |
| 0.568 | 0.54 | 550 | 0.6694 | -1.5101 | -1.7458 | 0.6022 | 0.2356 | -27.3766 | -25.8260 | 13.5776 | 13.5810 |
| 0.6217 | 0.59 | 600 | 0.6645 | -1.4050 | -1.6543 | 0.6110 | 0.2492 | -27.0716 | -25.4756 | 13.6337 | 13.6371 |
| 0.6186 | 0.64 | 650 | 0.6682 | -1.3826 | -1.6291 | 0.5978 | 0.2465 | -26.9876 | -25.4007 | 13.4204 | 13.4237 |
| 0.6637 | 0.68 | 700 | 0.6633 | -1.3994 | -1.6501 | 0.6220 | 0.2507 | -27.0576 | -25.4569 | 13.4574 | 13.4608 |
| 0.7482 | 0.73 | 750 | 0.6632 | -1.3772 | -1.6269 | 0.6198 | 0.2497 | -26.9804 | -25.3829 | 13.4047 | 13.4081 |
| 0.6597 | 0.78 | 800 | 0.6627 | -1.3970 | -1.6527 | 0.6198 | 0.2557 | -27.0664 | -25.4489 | 13.3914 | 13.3948 |
| 0.7206 | 0.83 | 850 | 0.6613 | -1.4018 | -1.6593 | 0.6220 | 0.2575 | -27.0885 | -25.4648 | 13.3862 | 13.3896 |
| 0.6715 | 0.88 | 900 | 0.6633 | -1.4047 | -1.6584 | 0.6220 | 0.2537 | -27.0856 | -25.4746 | 13.3969 | 13.4003 |
| 0.6108 | 0.93 | 950 | 0.6633 | -1.4042 | -1.6585 | 0.6242 | 0.2543 | -27.0857 | -25.4727 | 13.3883 | 13.3917 |
| 0.5964 | 0.98 | 1000 | 0.6641 | -1.4066 | -1.6576 | 0.6198 | 0.2510 | -27.0829 | -25.4808 | 13.3887 | 13.3921 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
SimoneJLaudani/trainerH2 | SimoneJLaudani | 2024-03-28T01:28:41Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-26T12:49:44Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: trainerH2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainerH2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4920
- Precision: 0.4067
- Recall: 0.3978
- F1: 0.3953
- Accuracy: 0.3978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.9606 | 0.14 | 30 | 1.9448 | 0.0204 | 0.1429 | 0.0357 | 0.1429 |
| 1.9441 | 0.27 | 60 | 1.9333 | 0.2001 | 0.1569 | 0.0797 | 0.1569 |
| 1.9321 | 0.41 | 90 | 1.9106 | 0.2762 | 0.1905 | 0.1308 | 0.1905 |
| 1.8952 | 0.54 | 120 | 1.8562 | 0.1136 | 0.2493 | 0.1410 | 0.2493 |
| 1.8421 | 0.68 | 150 | 1.7740 | 0.1981 | 0.2689 | 0.1771 | 0.2689 |
| 1.7692 | 0.81 | 180 | 1.7638 | 0.1872 | 0.2493 | 0.1821 | 0.2493 |
| 1.7503 | 0.95 | 210 | 1.7322 | 0.1958 | 0.2745 | 0.1909 | 0.2745 |
| 1.6431 | 1.08 | 240 | 1.7174 | 0.2711 | 0.2801 | 0.2022 | 0.2801 |
| 1.5781 | 1.22 | 270 | 1.7194 | 0.2918 | 0.2857 | 0.2271 | 0.2857 |
| 1.6173 | 1.35 | 300 | 1.7026 | 0.3020 | 0.3025 | 0.2288 | 0.3025 |
| 1.6257 | 1.49 | 330 | 1.6847 | 0.2873 | 0.3165 | 0.2549 | 0.3165 |
| 1.5856 | 1.62 | 360 | 1.6398 | 0.3525 | 0.3165 | 0.2837 | 0.3165 |
| 1.5168 | 1.76 | 390 | 1.6489 | 0.3383 | 0.3333 | 0.3056 | 0.3333 |
| 1.493 | 1.89 | 420 | 1.6104 | 0.2815 | 0.3249 | 0.2765 | 0.3249 |
| 1.5084 | 2.03 | 450 | 1.5793 | 0.3815 | 0.3782 | 0.3599 | 0.3782 |
| 1.2633 | 2.16 | 480 | 1.5386 | 0.4022 | 0.3894 | 0.3577 | 0.3894 |
| 1.2758 | 2.3 | 510 | 1.6491 | 0.4033 | 0.3782 | 0.3643 | 0.3782 |
| 1.2099 | 2.43 | 540 | 1.5144 | 0.4240 | 0.4398 | 0.4184 | 0.4398 |
| 1.2189 | 2.57 | 570 | 1.5441 | 0.3683 | 0.3697 | 0.3603 | 0.3697 |
| 1.1147 | 2.7 | 600 | 1.5031 | 0.3840 | 0.3866 | 0.3751 | 0.3866 |
| 1.1775 | 2.84 | 630 | 1.4929 | 0.3773 | 0.3922 | 0.3805 | 0.3922 |
| 1.0987 | 2.97 | 660 | 1.4932 | 0.4060 | 0.3978 | 0.3950 | 0.3978 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
lunarsylph/stablecell_v7 | lunarsylph | 2024-03-28T01:26:00Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T01:20:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gemmathon/gemma-2b-it-qlora-v2 | gemmathon | 2024-03-28T01:25:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T01:24:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ijwatson98/sft-gpt2-xsum-2703-tldr | ijwatson98 | 2024-03-28T01:21:40Z | 170 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T19:03:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ayukat1016/bert-base-japanese-v3-wrime-sentiment | ayukat1016 | 2024-03-28T01:20:48Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-28T01:20:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adasgaleus/BIM-0.75 | adasgaleus | 2024-03-28T01:16:14Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-28T01:15:52Z | ---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 20240327180321_happy_vaswani
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20240327180321_happy_vaswani
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0288
- Precision: 0.9791
- Recall: 0.9836
- F1: 0.9813
- Accuracy: 0.9908
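No usage snippet is included; a minimal token-classification sketch with the 🤗 `pipeline` API could look like this (the label set is not documented on this card, so predictions are printed raw):
```python
from transformers import pipeline

# Load the fine-tuned BERT token classifier.
tagger = pipeline(
    "token-classification",
    model="adasgaleus/BIM-0.75",
)

for prediction in tagger("The quick brown fox jumps over the lazy dog."):
    print(prediction["word"], prediction["entity"], round(prediction["score"], 3))
```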
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 69
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 350
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0627 | 0.09 | 300 | 0.0535 | 0.9603 | 0.9629 | 0.9616 | 0.9809 |
| 0.0571 | 0.17 | 600 | 0.0485 | 0.9625 | 0.9685 | 0.9655 | 0.9827 |
| 0.0523 | 0.26 | 900 | 0.0451 | 0.9639 | 0.9721 | 0.9680 | 0.9840 |
| 0.0498 | 0.35 | 1200 | 0.0452 | 0.9659 | 0.9700 | 0.9680 | 0.9841 |
| 0.0498 | 0.44 | 1500 | 0.0440 | 0.9675 | 0.9717 | 0.9696 | 0.9849 |
| 0.0487 | 0.52 | 1800 | 0.0429 | 0.9674 | 0.9714 | 0.9694 | 0.9848 |
| 0.0485 | 0.61 | 2100 | 0.0431 | 0.9668 | 0.9733 | 0.9700 | 0.9850 |
| 0.0468 | 0.7 | 2400 | 0.0410 | 0.9672 | 0.9745 | 0.9709 | 0.9855 |
| 0.0469 | 0.78 | 2700 | 0.0412 | 0.9671 | 0.9754 | 0.9713 | 0.9857 |
| 0.0473 | 0.87 | 3000 | 0.0419 | 0.9678 | 0.9731 | 0.9704 | 0.9853 |
| 0.0455 | 0.96 | 3300 | 0.0415 | 0.9674 | 0.9756 | 0.9715 | 0.9857 |
| 0.0417 | 1.04 | 3600 | 0.0404 | 0.9674 | 0.9763 | 0.9718 | 0.9859 |
| 0.0428 | 1.13 | 3900 | 0.0410 | 0.9683 | 0.9755 | 0.9719 | 0.9860 |
| 0.0421 | 1.22 | 4200 | 0.0400 | 0.9691 | 0.9750 | 0.9721 | 0.9861 |
| 0.0412 | 1.31 | 4500 | 0.0403 | 0.9681 | 0.9763 | 0.9722 | 0.9861 |
| 0.0411 | 1.39 | 4800 | 0.0384 | 0.9706 | 0.9764 | 0.9735 | 0.9869 |
| 0.0401 | 1.48 | 5100 | 0.0381 | 0.9697 | 0.9772 | 0.9734 | 0.9867 |
| 0.0399 | 1.57 | 5400 | 0.0373 | 0.9711 | 0.9759 | 0.9735 | 0.9869 |
| 0.0398 | 1.65 | 5700 | 0.0367 | 0.9703 | 0.9780 | 0.9742 | 0.9871 |
| 0.0393 | 1.74 | 6000 | 0.0374 | 0.9687 | 0.9783 | 0.9735 | 0.9869 |
| 0.039 | 1.83 | 6300 | 0.0359 | 0.9709 | 0.9781 | 0.9745 | 0.9873 |
| 0.0386 | 1.92 | 6600 | 0.0361 | 0.9711 | 0.9780 | 0.9746 | 0.9873 |
| 0.0376 | 2.0 | 6900 | 0.0362 | 0.9717 | 0.9784 | 0.9750 | 0.9876 |
| 0.0346 | 2.09 | 7200 | 0.0359 | 0.9712 | 0.9790 | 0.9751 | 0.9876 |
| 0.0344 | 2.18 | 7500 | 0.0345 | 0.9730 | 0.9785 | 0.9757 | 0.9880 |
| 0.0335 | 2.26 | 7800 | 0.0340 | 0.9725 | 0.9789 | 0.9757 | 0.9880 |
| 0.0337 | 2.35 | 8100 | 0.0344 | 0.9722 | 0.9795 | 0.9758 | 0.9880 |
| 0.0336 | 2.44 | 8400 | 0.0344 | 0.9721 | 0.9806 | 0.9763 | 0.9883 |
| 0.033 | 2.53 | 8700 | 0.0342 | 0.9734 | 0.9792 | 0.9763 | 0.9883 |
| 0.0331 | 2.61 | 9000 | 0.0345 | 0.9736 | 0.9792 | 0.9764 | 0.9883 |
| 0.0329 | 2.7 | 9300 | 0.0331 | 0.9727 | 0.9808 | 0.9767 | 0.9884 |
| 0.032 | 2.79 | 9600 | 0.0332 | 0.9731 | 0.9808 | 0.9769 | 0.9886 |
| 0.0323 | 2.87 | 9900 | 0.0321 | 0.9740 | 0.9808 | 0.9774 | 0.9888 |
| 0.0314 | 2.96 | 10200 | 0.0322 | 0.9748 | 0.9805 | 0.9776 | 0.9889 |
| 0.0275 | 3.05 | 10500 | 0.0327 | 0.9750 | 0.9800 | 0.9775 | 0.9888 |
| 0.0275 | 3.13 | 10800 | 0.0330 | 0.9736 | 0.9810 | 0.9773 | 0.9888 |
| 0.0272 | 3.22 | 11100 | 0.0321 | 0.9753 | 0.9816 | 0.9784 | 0.9893 |
| 0.0272 | 3.31 | 11400 | 0.0319 | 0.9749 | 0.9810 | 0.9779 | 0.9891 |
| 0.0269 | 3.4 | 11700 | 0.0305 | 0.9758 | 0.9810 | 0.9784 | 0.9893 |
| 0.027 | 3.48 | 12000 | 0.0303 | 0.9762 | 0.9814 | 0.9788 | 0.9895 |
| 0.0267 | 3.57 | 12300 | 0.0300 | 0.9764 | 0.9819 | 0.9792 | 0.9897 |
| 0.0263 | 3.66 | 12600 | 0.0297 | 0.9766 | 0.9818 | 0.9792 | 0.9898 |
| 0.0261 | 3.74 | 12900 | 0.0296 | 0.9766 | 0.9824 | 0.9795 | 0.9899 |
| 0.0255 | 3.83 | 13200 | 0.0294 | 0.9775 | 0.9827 | 0.9801 | 0.9902 |
| 0.0254 | 3.92 | 13500 | 0.0289 | 0.9774 | 0.9828 | 0.9801 | 0.9902 |
| 0.0234 | 4.01 | 13800 | 0.0302 | 0.9775 | 0.9826 | 0.9801 | 0.9901 |
| 0.0207 | 4.09 | 14100 | 0.0303 | 0.9773 | 0.9823 | 0.9798 | 0.9900 |
| 0.0205 | 4.18 | 14400 | 0.0299 | 0.9779 | 0.9825 | 0.9802 | 0.9903 |
| 0.0205 | 4.27 | 14700 | 0.0296 | 0.9781 | 0.9828 | 0.9804 | 0.9903 |
| 0.0205 | 4.35 | 15000 | 0.0291 | 0.9785 | 0.9831 | 0.9808 | 0.9906 |
| 0.0201 | 4.44 | 15300 | 0.0294 | 0.9781 | 0.9830 | 0.9805 | 0.9904 |
| 0.0198 | 4.53 | 15600 | 0.0290 | 0.9784 | 0.9831 | 0.9807 | 0.9905 |
| 0.0199 | 4.62 | 15900 | 0.0293 | 0.9781 | 0.9835 | 0.9808 | 0.9905 |
| 0.0199 | 4.7 | 16200 | 0.0291 | 0.9789 | 0.9835 | 0.9812 | 0.9907 |
| 0.0195 | 4.79 | 16500 | 0.0293 | 0.9788 | 0.9835 | 0.9811 | 0.9907 |
| 0.0196 | 4.88 | 16800 | 0.0290 | 0.9787 | 0.9835 | 0.9811 | 0.9907 |
| 0.0196 | 4.96 | 17100 | 0.0288 | 0.9791 | 0.9836 | 0.9813 | 0.9908 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.0a0+6a974be
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Changgil/K2S3-Mistral-7bx2-48layers_v1.2 | Changgil | 2024-03-28T01:15:33Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T00:36:11Z | ---
license: cc-by-nc-4.0
language:
- en
- ko
---
---
## Developed by:
* K2S3
## Model Number:
* K2S3-Mistral-7bx2-48layers_v1.2
## Merge Method
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
  - sources:
      - layer_range: [0, 24]
        model:
          model:
            path: Changgil/K2S3-Mistral-7b-v1.2
  - sources:
      - layer_range: [8, 32]
        model:
          model:
            path: Changgil/K2S3-Mistral-7b-v1.2
```
|
togethercomputer/Llama-2-7B-32K-Instruct | togethercomputer | 2024-03-28T01:13:47Z | 6,853 | 159 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:togethercomputer/llama-instruct",
"arxiv:2307.03172",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-08T20:22:27Z | ---
license: llama2
language:
- en
library_name: transformers
datasets:
- togethercomputer/llama-instruct
---
# Llama-2-7B-32K-Instruct
## Model Description
Llama-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K), over high-quality instruction and chat data.
We built Llama-2-7B-32K-Instruct with less than 200 lines of Python script using [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
We hope that this can enable everyone to finetune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) — play with [Together API](https://together.ai/blog/api-announcement) and give us feedback!
## Data Collection Details
Llama-2-7B-32K-Instruct is fine-tuned over a combination of two parts:
1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**.
We collected the dataset following the distillation paradigm that is used by Alpaca, Vicuna, WizardLM, Orca — producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)).
The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
2. **Long-context Summarization and Long-context QA**.
We follow the recipe of [Llama-2-7B-32K](https://together.ai/blog/Llama-2-7B-32K), and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).
The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%).
## Model Usage
We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.
To run the model locally, we strongly recommend to install Flash Attention V2, which is necessary to obtain the best performance:
```
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
You can load the model directly from the Hugging Face model hub using
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct",
trust_remote_code=True, torch_dtype=torch.float16)
input_ids = tokenizer.encode("[INST]\nWrite a poem about cats\n[/INST]\n\n", return_tensors="pt")
output = model.generate(input_ids, max_length=128,
temperature=0.7, repetition_penalty=1.1, top_p=0.7, top_k=50)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```
The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can simply play with the model by using prompt formatted by:
```
[INST]\n<your instruction here>\n[/INST]\n\n
```
For example, if we query the model with
```
[INST]\nWrite a poem about cats\n[/INST]\n\n
```
the model will return
```
[INST]
Write a poem about cats
[/INST]
Cats are mysterious creatures, with their aloof and independent nature. They're also incredibly beautiful, with their sleek fur and piercing eyes. Here's a poem that captures the essence of cats:
Cats, oh cats, how can I describe you?
Your beauty is beyond compare, it seems.
You're graceful and elegant, like a ballerina's dance,
But don't let your charm fool you, for you're not easily tamed.
With your soft purring and playful meows,
You draw us in with your enchanting powers.
We love to watch you play, your tail twirling 'round,
As if you're dancing on air, with no sound.
But don't be fooled by your sweetness, my friend,
For beneath that gentle exterior, lies a fierce defender.
When danger lurks, you'll spring into action,
Protecting those you hold dear, without question.
Solet us admire you, from afar,
For in your own way, you're truly unique, a star.
And though we may never fully understand,
The depths of your soul, we'll always stand, hand in paw, as one.
This poem captures the essence of cats, highlighting their beauty, independence,and protective nature. It also celebrates the special bond between humans and cats, recognizing their unique qualities and the joy they bring to our lives.
```
## Model Evaluation
We evaluate the model from three aspects: 1) [Alpaca Eval](https://tatsu-lab.github.io/alpaca_eval/);
2) [Rouge score over BookSum](https://together.ai/blog/Llama-2-7B-32K); and
3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/Llama-2-7B-32K).
We compare with models including
[GPT-3.5-Turbo-16K](https://platform.openai.com/docs/models/gpt-3-5),
[Llama-2-7B-Chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),
[Longchat-7b-16k](https://huggingface.co/lmsys/longchat-7b-16k)
and [Longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k).
We summarize the results below:
* Alpaca Eval
| Model | win_rate | standard_error | n_total | avg_length |
| -------- | ------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 71.37 | 1.59 | 805 | 1479 |
| Llama-2-7B-32K-Instruct | 70.36 | 1.61 | 803 | 1885 |
| oasst-rlhf-llama-33b | 66.52 | 1.66 | 805 | 1079 |
| text_davinci_003 | 50.00 | 0.00 | 805 | 307|
| falcon-40b-instruct | 45.71 | 1.75 | 805 | 662 |
| alpaca-farm-ppo-human | 41.24 | 1.73 | 805 | 803 |
| alpaca-7b | 26.46 | 1.54 | 805 | 396 |
| text_davinci_001 | 15.17 | 1.24 | 804 | 296 |
* Rouge Score over BookSum
| Model | R1 | R2 | RL |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.055 | 0.008 | 0.046 |
| Longchat-7b-16k | 0.303 | 0.055 | 0.160 |
| Longchat-7b-v1.5-32k | 0.308 | 0.057 | 0.163 |
| GPT-3.5-Turbo-16K | 0.324 | 0.066 | 0.178 |
| Llama-2-7B-32K-Instruct (ours) | 0.336 | 0.076 | 0.184 |
* Accuracy over MQA
| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.448 | 0.421 | 0.354 |
| Longchat-7b-16k | 0.510 | 0.473 | 0.428 |
| Longchat-7b-v1.5-32k | 0.534 | 0.516 | 0.479 |
| GPT-3.5-Turbo-16K | 0.622 | 0.609 | 0.577 |
| Llama-2-7B-32K-Instruct (ours) | 0.622 | 0.604 | 0.589 |
## Limitations and Bias
As with all language models, Llama-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4) |
2hs/mistralai-Code-Instruct-Finetune-test | 2hs | 2024-03-28T01:09:00Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-22T07:38:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MM2157/AraBERT_token_classification__AraEval24 | MM2157 | 2024-03-28T01:08:17Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-27T16:27:52Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: AraBERT_token_classification__AraEval24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AraBERT_token_classification__AraEval24
This model is a fine-tuned version of [aubmindlab/bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8744
- Precision: 0.1001
- Recall: 0.0230
- F1: 0.0374
- Accuracy: 0.8601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6497 | 1.0 | 2851 | 0.7614 | 0.0769 | 0.0007 | 0.0015 | 0.8631 |
| 0.5817 | 2.0 | 5702 | 0.8128 | 0.1441 | 0.0020 | 0.0039 | 0.8635 |
| 0.5328 | 3.0 | 8553 | 0.7802 | 0.1538 | 0.0007 | 0.0015 | 0.8634 |
| 0.5006 | 4.0 | 11404 | 0.7901 | 0.1269 | 0.0021 | 0.0041 | 0.8633 |
| 0.4445 | 5.0 | 14255 | 0.8134 | 0.1038 | 0.0014 | 0.0027 | 0.8634 |
| 0.4261 | 6.0 | 17106 | 0.8102 | 0.1135 | 0.0124 | 0.0223 | 0.8623 |
| 0.4081 | 7.0 | 19957 | 0.8238 | 0.1029 | 0.0131 | 0.0233 | 0.8624 |
| 0.3831 | 8.0 | 22808 | 0.8346 | 0.0913 | 0.0139 | 0.0241 | 0.8593 |
| 0.3525 | 9.0 | 25659 | 0.8433 | 0.1044 | 0.0246 | 0.0399 | 0.8601 |
| 0.3471 | 10.0 | 28510 | 0.8744 | 0.1001 | 0.0230 | 0.0374 | 0.8601 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.2
- Tokenizers 0.13.3
|
giantdev/zeta-Olenekianh62 | giantdev | 2024-03-28T01:07:45Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T00:42:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
giantdev/zeta-Olenekianh61 | giantdev | 2024-03-28T01:07:12Z | 95 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T00:40:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sanchit-gandhi/distil-large-v3-hi-ft | sanchit-gandhi | 2024-03-28T01:05:47Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_1",
"base_model:distil-whisper/distil-large-v3",
"base_model:finetune:distil-whisper/distil-large-v3",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-03-27T11:00:41Z | ---
license: mit
base_model: distil-whisper/distil-large-v3
tags:
- generated_from_trainer
datasets:
- common_voice_16_1
metrics:
- wer
model-index:
- name: distil-whisper/distil-large-v3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_16_1
type: common_voice_16_1
config: hi
split: test
args: hi
metrics:
- name: Wer
type: wer
value: 0.3297535347291973
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-whisper/distil-large-v3
This model is a fine-tuned version of [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) on the common_voice_16_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6148
- Wer: 0.3298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.125 | 4.5 | 1000 | 0.4658 | 0.4300 |
| 0.0412 | 9.01 | 2000 | 0.5247 | 0.3960 |
| 0.0077 | 13.51 | 3000 | 0.5476 | 0.3535 |
| 0.0007 | 18.02 | 4000 | 0.5731 | 0.3398 |
| 0.0001 | 22.52 | 5000 | 0.6148 | 0.3298 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
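## Inference example
The fine-tuned checkpoint can be used directly with the 🤗 Transformers ASR pipeline. The sketch below is minimal: the audio path is a placeholder for any local Hindi recording (ffmpeg is needed for decoding).
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/distil-large-v3-hi-ft",
)

# "sample_hi.wav" is a hypothetical local file; replace it with your own audio
print(asr("sample_hi.wav")["text"])
```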
|
tsavage68/mpt_1000_STEPS_1e8_rate03_beta_DPO | tsavage68 | 2024-03-28T00:56:36Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"custom_code",
"base_model:mosaicml/mpt-7b-instruct",
"base_model:finetune:mosaicml/mpt-7b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T22:01:48Z | ---
license: apache-2.0
base_model: mosaicml/mpt-7b-instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mpt_1000_STEPS_1e5_rate_03_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mpt_1000_STEPS_1e5_rate_03_beta_DPO
This model is a fine-tuned version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6933
- Rewards/chosen: -0.0008
- Rewards/rejected: -0.0019
- Rewards/accuracies: 0.5187
- Rewards/margins: 0.0011
- Logps/rejected: -21.5638
- Logps/chosen: -20.7947
- Logits/rejected: 14.2524
- Logits/chosen: 14.2550
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6965 | 0.1 | 100 | 0.6951 | -0.0017 | 0.0013 | 0.4681 | -0.0029 | -21.5532 | -20.7977 | 14.2557 | 14.2583 |
| 0.6918 | 0.2 | 200 | 0.6942 | -0.0054 | -0.0044 | 0.5011 | -0.0010 | -21.5722 | -20.8104 | 14.2575 | 14.2601 |
| 0.6965 | 0.29 | 300 | 0.6941 | -0.0016 | -0.0010 | 0.4945 | -0.0006 | -21.5608 | -20.7975 | 14.2549 | 14.2575 |
| 0.6906 | 0.39 | 400 | 0.6946 | 0.0001 | 0.0020 | 0.4747 | -0.0019 | -21.5507 | -20.7919 | 14.2494 | 14.2520 |
| 0.6883 | 0.49 | 500 | 0.6972 | -0.0019 | 0.0050 | 0.4484 | -0.0069 | -21.5408 | -20.7986 | 14.2521 | 14.2547 |
| 0.6867 | 0.59 | 600 | 0.6969 | -0.0054 | 0.0010 | 0.4418 | -0.0064 | -21.5541 | -20.8103 | 14.2502 | 14.2528 |
| 0.6937 | 0.68 | 700 | 0.6939 | 0.0015 | 0.0020 | 0.5275 | -0.0005 | -21.5508 | -20.7871 | 14.2547 | 14.2573 |
| 0.6855 | 0.78 | 800 | 0.6933 | -0.0008 | -0.0017 | 0.5099 | 0.0009 | -21.5631 | -20.7947 | 14.2522 | 14.2548 |
| 0.6918 | 0.88 | 900 | 0.6933 | -0.0008 | -0.0019 | 0.5187 | 0.0011 | -21.5638 | -20.7947 | 14.2524 | 14.2550 |
| 0.6957 | 0.98 | 1000 | 0.6933 | -0.0008 | -0.0019 | 0.5187 | 0.0011 | -21.5638 | -20.7947 | 14.2524 | 14.2550 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
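## Inference example
A minimal generation sketch. MPT checkpoints ship custom modeling code, so `trust_remote_code=True` is required; `device_map="auto"` assumes `accelerate` is installed and a GPU is available, and the prompt is an arbitrary example.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/mpt_1000_STEPS_1e8_rate03_beta_DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

prompt = "Explain in one sentence what Direct Preference Optimization does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```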
|
michelebasilico/itaca_7b_mistral_4bit | michelebasilico | 2024-03-28T00:54:58Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-03-24T09:03:14Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** michelebasilico
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
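A minimal loading sketch with 🤗 Transformers: the repo stores a bitsandbytes 4-bit checkpoint, so `bitsandbytes` and `accelerate` must be installed and a CUDA GPU is assumed; the prompt is an arbitrary example, since the card does not document a prompt template.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "michelebasilico/itaca_7b_mistral_4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# recent transformers versions read the 4-bit quantization config stored in the checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Ciao! Presentati in una frase."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```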
|
thrunlab/relu_llama_7b_hf_fp16_refined_web_relu_2024-03-27 | thrunlab | 2024-03-28T00:53:26Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"sparse_llama",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-03-28T00:21:17Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: relu_llama_7b_hf_fp16_refined_web_relu_2024-03-27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# relu_llama_7b_hf_fp16_refined_web_relu_2024-03-27
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 0
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.1534 | 0.01 | 25 | 9.9183 |
| 8.7138 | 0.02 | 50 | 8.3260 |
| 7.3744 | 0.02 | 75 | 7.3115 |
| 6.2344 | 0.03 | 100 | 6.1079 |
| 5.5305 | 0.04 | 125 | 5.1969 |
| 4.5244 | 0.05 | 150 | 4.5551 |
| 4.0661 | 0.06 | 175 | 4.1037 |
| 3.8614 | 0.06 | 200 | 3.7818 |
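For context, the evaluation loss reported above (3.6852) corresponds to a perplexity of roughly 40, assuming the loss is a mean token-level cross-entropy in nats:
```python
import math

eval_loss = 3.6852  # evaluation-set loss reported above
print(f"perplexity = {math.exp(eval_loss):.1f}")  # about 39.9
```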
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.2
|
gagan3012/Multilingual-mistral | gagan3012 | 2024-03-28T00:47:38Z | 1,382 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"openchat/openchat-3.5-0106",
"giux78/zefiro-7b-beta-ITA-v0.1",
"azale-ai/Starstreak-7b-beta",
"gagan3012/Mistral_arabic_dpo",
"davidkim205/komt-mistral-7b-v1",
"OpenBuddy/openbuddy-zephyr-7b-v14.1",
"manishiitg/open-aditi-hi-v1",
"VAGOsolutions/SauerkrautLM-7b-v1-mistral",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-01-15T23:20:29Z | ---
license: apache-2.0
tags:
- moe
- mixtral
- openchat/openchat-3.5-0106
- giux78/zefiro-7b-beta-ITA-v0.1
- azale-ai/Starstreak-7b-beta
- gagan3012/Mistral_arabic_dpo
- davidkim205/komt-mistral-7b-v1
- OpenBuddy/openbuddy-zephyr-7b-v14.1
- manishiitg/open-aditi-hi-v1
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
model-index:
- name: Multilingual-mistral
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.53
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.26
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gagan3012/Multilingual-mistral
name: Open LLM Leaderboard
---
# Multilingual-mistral
This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [giux78/zefiro-7b-beta-ITA-v0.1](https://huggingface.co/giux78/zefiro-7b-beta-ITA-v0.1)
* [azale-ai/Starstreak-7b-beta](https://huggingface.co/azale-ai/Starstreak-7b-beta)
* [gagan3012/Mistral_arabic_dpo](https://huggingface.co/gagan3012/Mistral_arabic_dpo)
* [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1)
* [OpenBuddy/openbuddy-zephyr-7b-v14.1](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1)
* [manishiitg/open-aditi-hi-v1](https://huggingface.co/manishiitg/open-aditi-hi-v1)
* [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
## 🧩 Configuration
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
experts:
- positive_prompts:
- chat
- assistant
- tell me
- explain
source_model: openchat/openchat-3.5-0106
- positive_prompts:
- chat
- assistant
- tell me
- explain
source_model: giux78/zefiro-7b-beta-ITA-v0.1
- positive_prompts:
- indonesian
- indonesia
- answer in indonesian
source_model: azale-ai/Starstreak-7b-beta
- positive_prompts:
- arabic
- arab
- arabia
- answer in arabic
source_model: gagan3012/Mistral_arabic_dpo
- positive_prompts:
- korean
- answer in korean
- korea
source_model: davidkim205/komt-mistral-7b-v1
- positive_prompts:
- chinese
- china
- answer in chinese
source_model: OpenBuddy/openbuddy-zephyr-7b-v14.1
- positive_prompts:
- hindi
- india
- hindu
- answer in hindi
source_model: manishiitg/open-aditi-hi-v1
- positive_prompts:
- german
- germany
- answer in german
- deutsch
source_model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
gate_mode: hidden
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gagan3012/Multilingual-mistral"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_gagan3012__Multilingual-mistral)
| Metric |Value|
|---------------------------------|----:|
|Avg. |62.79|
|AI2 Reasoning Challenge (25-Shot)|62.29|
|HellaSwag (10-Shot) |81.76|
|MMLU (5-Shot) |61.38|
|TruthfulQA (0-shot) |55.53|
|Winogrande (5-shot) |75.53|
|GSM8k (5-shot) |40.26|
|
jlbaker361/compare-classifier-all | jlbaker361 | 2024-03-28T00:38:19Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-03-28T00:38:16Z | ---
{}
---
# DDPO trained model
- num_epochs=20
- train_gradient_accumulation_steps=1
- sample_num_steps=30
- sample_batch_size=8
- train_batch_size=8
- sample_num_batches_per_epoch=32
- based on stabilityai/stable-diffusion-2-base
- then further trained from None
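A minimal text-to-image sketch with diffusers; this assumes the repo contains a full diffusers-format pipeline and that a CUDA GPU is available, and the prompt is an arbitrary example.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "jlbaker361/compare-classifier-all", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk", num_inference_steps=30).images[0]
image.save("sample.png")
```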
|
alvwjy/Llama2-MDL-pretrained-issue-docs | alvwjy | 2024-03-28T00:37:16Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T00:28:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adasgaleus/LIM-0.25 | adasgaleus | 2024-03-28T00:35:34Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-28T00:35:15Z | ---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 20240327184156_red_mikolov
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20240327184156_red_mikolov
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0292
- Precision: 0.9583
- Recall: 0.9476
- F1: 0.9529
- Accuracy: 0.9893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 69
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 350
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0569 | 0.09 | 300 | 0.0439 | 0.9281 | 0.9188 | 0.9234 | 0.9828 |
| 0.0602 | 0.18 | 600 | 0.0475 | 0.9297 | 0.9080 | 0.9187 | 0.9817 |
| 0.0586 | 0.27 | 900 | 0.0455 | 0.9277 | 0.9117 | 0.9196 | 0.9820 |
| 0.0566 | 0.36 | 1200 | 0.0454 | 0.9269 | 0.9116 | 0.9192 | 0.9821 |
| 0.056 | 0.44 | 1500 | 0.0460 | 0.9362 | 0.9050 | 0.9204 | 0.9824 |
| 0.0549 | 0.53 | 1800 | 0.0430 | 0.9277 | 0.9244 | 0.9260 | 0.9831 |
| 0.0526 | 0.62 | 2100 | 0.0404 | 0.9326 | 0.9218 | 0.9272 | 0.9837 |
| 0.0523 | 0.71 | 2400 | 0.0413 | 0.9313 | 0.9243 | 0.9278 | 0.9836 |
| 0.0524 | 0.8 | 2700 | 0.0402 | 0.9410 | 0.9136 | 0.9271 | 0.9840 |
| 0.0517 | 0.89 | 3000 | 0.0413 | 0.9354 | 0.9198 | 0.9275 | 0.9837 |
| 0.0512 | 0.98 | 3300 | 0.0411 | 0.9360 | 0.9148 | 0.9253 | 0.9836 |
| 0.0442 | 1.07 | 3600 | 0.0406 | 0.9297 | 0.9285 | 0.9291 | 0.9840 |
| 0.0457 | 1.15 | 3900 | 0.0421 | 0.9412 | 0.9161 | 0.9285 | 0.9840 |
| 0.0452 | 1.24 | 4200 | 0.0400 | 0.9343 | 0.9281 | 0.9312 | 0.9847 |
| 0.0444 | 1.33 | 4500 | 0.0381 | 0.9341 | 0.9329 | 0.9335 | 0.9848 |
| 0.0434 | 1.42 | 4800 | 0.0376 | 0.9409 | 0.9268 | 0.9338 | 0.9850 |
| 0.0437 | 1.51 | 5100 | 0.0385 | 0.9407 | 0.9224 | 0.9315 | 0.9847 |
| 0.0424 | 1.6 | 5400 | 0.0364 | 0.9437 | 0.9268 | 0.9352 | 0.9855 |
| 0.042 | 1.69 | 5700 | 0.0370 | 0.9445 | 0.9247 | 0.9345 | 0.9853 |
| 0.0422 | 1.78 | 6000 | 0.0361 | 0.9408 | 0.9320 | 0.9364 | 0.9854 |
| 0.0413 | 1.86 | 6300 | 0.0354 | 0.9426 | 0.9303 | 0.9364 | 0.9857 |
| 0.0406 | 1.95 | 6600 | 0.0353 | 0.9408 | 0.9326 | 0.9367 | 0.9860 |
| 0.0336 | 2.04 | 6900 | 0.0353 | 0.9438 | 0.9342 | 0.9390 | 0.9862 |
| 0.0338 | 2.13 | 7200 | 0.0362 | 0.9500 | 0.9227 | 0.9362 | 0.9860 |
| 0.0341 | 2.22 | 7500 | 0.0356 | 0.9428 | 0.9325 | 0.9376 | 0.9861 |
| 0.0333 | 2.31 | 7800 | 0.0348 | 0.9423 | 0.9350 | 0.9386 | 0.9863 |
| 0.0344 | 2.4 | 8100 | 0.0337 | 0.9454 | 0.9368 | 0.9411 | 0.9869 |
| 0.0338 | 2.49 | 8400 | 0.0336 | 0.9486 | 0.9360 | 0.9422 | 0.9869 |
| 0.0334 | 2.57 | 8700 | 0.0336 | 0.9482 | 0.9332 | 0.9407 | 0.9866 |
| 0.0325 | 2.66 | 9000 | 0.0333 | 0.9491 | 0.9336 | 0.9413 | 0.9868 |
| 0.0323 | 2.75 | 9300 | 0.0320 | 0.9467 | 0.9382 | 0.9424 | 0.9873 |
| 0.0318 | 2.84 | 9600 | 0.0329 | 0.9531 | 0.9267 | 0.9397 | 0.9867 |
| 0.0316 | 2.93 | 9900 | 0.0314 | 0.9497 | 0.9372 | 0.9434 | 0.9874 |
| 0.0246 | 3.02 | 10200 | 0.0336 | 0.9510 | 0.9374 | 0.9441 | 0.9874 |
| 0.0246 | 3.11 | 10500 | 0.0313 | 0.9513 | 0.9435 | 0.9474 | 0.9880 |
| 0.0242 | 3.2 | 10800 | 0.0329 | 0.9500 | 0.9376 | 0.9437 | 0.9876 |
| 0.0248 | 3.29 | 11100 | 0.0313 | 0.9544 | 0.9364 | 0.9453 | 0.9881 |
| 0.0244 | 3.37 | 11400 | 0.0318 | 0.9509 | 0.9429 | 0.9469 | 0.9879 |
| 0.0244 | 3.46 | 11700 | 0.0302 | 0.9546 | 0.9417 | 0.9481 | 0.9882 |
| 0.0245 | 3.55 | 12000 | 0.0308 | 0.9504 | 0.9384 | 0.9444 | 0.9879 |
| 0.0237 | 3.64 | 12300 | 0.0304 | 0.9510 | 0.9401 | 0.9455 | 0.9880 |
| 0.0236 | 3.73 | 12600 | 0.0301 | 0.9572 | 0.9367 | 0.9468 | 0.9881 |
| 0.0232 | 3.82 | 12900 | 0.0299 | 0.9560 | 0.9417 | 0.9488 | 0.9884 |
| 0.0231 | 3.91 | 13200 | 0.0288 | 0.9555 | 0.9446 | 0.9500 | 0.9886 |
| 0.0228 | 4.0 | 13500 | 0.0287 | 0.9553 | 0.9450 | 0.9501 | 0.9886 |
| 0.0169 | 4.08 | 13800 | 0.0313 | 0.9563 | 0.9426 | 0.9494 | 0.9886 |
| 0.0169 | 4.17 | 14100 | 0.0311 | 0.9564 | 0.9434 | 0.9499 | 0.9887 |
| 0.0167 | 4.26 | 14400 | 0.0305 | 0.9562 | 0.9478 | 0.9520 | 0.9889 |
| 0.0166 | 4.35 | 14700 | 0.0304 | 0.9549 | 0.9478 | 0.9513 | 0.9890 |
| 0.0165 | 4.44 | 15000 | 0.0296 | 0.9579 | 0.9453 | 0.9516 | 0.9890 |
| 0.0162 | 4.53 | 15300 | 0.0295 | 0.9562 | 0.9492 | 0.9527 | 0.9892 |
| 0.0158 | 4.62 | 15600 | 0.0291 | 0.9563 | 0.9483 | 0.9523 | 0.9892 |
| 0.0157 | 4.71 | 15900 | 0.0288 | 0.9575 | 0.9505 | 0.9540 | 0.9894 |
| 0.0153 | 4.79 | 16200 | 0.0293 | 0.9580 | 0.9472 | 0.9526 | 0.9892 |
| 0.0152 | 4.88 | 16500 | 0.0292 | 0.9581 | 0.9476 | 0.9528 | 0.9893 |
| 0.0152 | 4.97 | 16800 | 0.0292 | 0.9583 | 0.9476 | 0.9529 | 0.9893 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.0a0+6a974be
- Datasets 2.18.0
- Tokenizers 0.15.2
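## Inference example
A minimal tagging sketch with the 🤗 Transformers pipeline; the label set of this token classifier is not documented here, and the input sentence is an arbitrary example.
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="adasgaleus/LIM-0.25",
    aggregation_strategy="simple",
)

print(tagger("Insert a keyword into this plain example sentence."))
```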
|
adasgaleus/BIM-0.5 | adasgaleus | 2024-03-28T00:33:50Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-03-28T00:33:26Z | ---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 20240327180321_slow_hinton
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20240327180321_slow_hinton
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0488
- Precision: 0.9507
- Recall: 0.9581
- F1: 0.9544
- Accuracy: 0.9830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 69
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 350
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.095 | 0.09 | 300 | 0.0845 | 0.9071 | 0.9202 | 0.9136 | 0.9668 |
| 0.0884 | 0.18 | 600 | 0.0782 | 0.9112 | 0.9274 | 0.9192 | 0.9689 |
| 0.0861 | 0.26 | 900 | 0.0761 | 0.9139 | 0.9294 | 0.9215 | 0.9698 |
| 0.082 | 0.35 | 1200 | 0.0742 | 0.9171 | 0.9322 | 0.9246 | 0.9711 |
| 0.0794 | 0.44 | 1500 | 0.0708 | 0.9229 | 0.9330 | 0.9279 | 0.9725 |
| 0.0788 | 0.53 | 1800 | 0.0699 | 0.9239 | 0.9339 | 0.9289 | 0.9729 |
| 0.078 | 0.62 | 2100 | 0.0701 | 0.9224 | 0.9339 | 0.9281 | 0.9726 |
| 0.0785 | 0.71 | 2400 | 0.0698 | 0.9278 | 0.9286 | 0.9282 | 0.9727 |
| 0.0768 | 0.79 | 2700 | 0.0686 | 0.9285 | 0.9326 | 0.9306 | 0.9736 |
| 0.0764 | 0.88 | 3000 | 0.0694 | 0.9166 | 0.9418 | 0.9290 | 0.9727 |
| 0.0754 | 0.97 | 3300 | 0.0674 | 0.9289 | 0.9341 | 0.9315 | 0.9740 |
| 0.0687 | 1.06 | 3600 | 0.0665 | 0.9304 | 0.9359 | 0.9332 | 0.9746 |
| 0.0697 | 1.15 | 3900 | 0.0664 | 0.9256 | 0.9410 | 0.9332 | 0.9744 |
| 0.0682 | 1.24 | 4200 | 0.0651 | 0.9258 | 0.9418 | 0.9337 | 0.9746 |
| 0.0679 | 1.32 | 4500 | 0.0637 | 0.9296 | 0.9425 | 0.9360 | 0.9757 |
| 0.0685 | 1.41 | 4800 | 0.0640 | 0.9288 | 0.9428 | 0.9357 | 0.9755 |
| 0.0662 | 1.5 | 5100 | 0.0627 | 0.9336 | 0.9394 | 0.9365 | 0.9760 |
| 0.0655 | 1.59 | 5400 | 0.0617 | 0.9334 | 0.9422 | 0.9378 | 0.9764 |
| 0.0656 | 1.68 | 5700 | 0.0621 | 0.9298 | 0.9458 | 0.9377 | 0.9763 |
| 0.065 | 1.77 | 6000 | 0.0610 | 0.9352 | 0.9419 | 0.9386 | 0.9768 |
| 0.0647 | 1.85 | 6300 | 0.0597 | 0.9341 | 0.9465 | 0.9403 | 0.9774 |
| 0.0629 | 1.94 | 6600 | 0.0591 | 0.9342 | 0.9457 | 0.9399 | 0.9772 |
| 0.0557 | 2.03 | 6900 | 0.0592 | 0.9375 | 0.9455 | 0.9415 | 0.9779 |
| 0.0563 | 2.12 | 7200 | 0.0598 | 0.9355 | 0.9454 | 0.9404 | 0.9774 |
| 0.0564 | 2.21 | 7500 | 0.0573 | 0.9375 | 0.9483 | 0.9428 | 0.9783 |
| 0.0574 | 2.3 | 7800 | 0.0571 | 0.9368 | 0.9490 | 0.9429 | 0.9783 |
| 0.0564 | 2.38 | 8100 | 0.0578 | 0.9375 | 0.9482 | 0.9428 | 0.9783 |
| 0.0553 | 2.47 | 8400 | 0.0574 | 0.9387 | 0.9472 | 0.9429 | 0.9785 |
| 0.0557 | 2.56 | 8700 | 0.0564 | 0.9378 | 0.9505 | 0.9441 | 0.9788 |
| 0.0554 | 2.65 | 9000 | 0.0557 | 0.9410 | 0.9472 | 0.9441 | 0.9789 |
| 0.0542 | 2.74 | 9300 | 0.0545 | 0.9409 | 0.9516 | 0.9462 | 0.9796 |
| 0.0533 | 2.83 | 9600 | 0.0540 | 0.9430 | 0.9501 | 0.9465 | 0.9799 |
| 0.0523 | 2.91 | 9900 | 0.0538 | 0.9388 | 0.9523 | 0.9455 | 0.9794 |
| 0.0509 | 3.0 | 10200 | 0.0547 | 0.9430 | 0.9503 | 0.9466 | 0.9798 |
| 0.0459 | 3.09 | 10500 | 0.0538 | 0.9428 | 0.9512 | 0.9470 | 0.9801 |
| 0.0443 | 3.18 | 10800 | 0.0549 | 0.9438 | 0.9496 | 0.9467 | 0.9800 |
| 0.0458 | 3.27 | 11100 | 0.0536 | 0.9440 | 0.9516 | 0.9478 | 0.9804 |
| 0.0445 | 3.36 | 11400 | 0.0523 | 0.9451 | 0.9509 | 0.9480 | 0.9805 |
| 0.0449 | 3.44 | 11700 | 0.0513 | 0.9453 | 0.9527 | 0.9490 | 0.9808 |
| 0.0442 | 3.53 | 12000 | 0.0518 | 0.9477 | 0.9513 | 0.9495 | 0.9811 |
| 0.0441 | 3.62 | 12300 | 0.0511 | 0.9447 | 0.9551 | 0.9499 | 0.9811 |
| 0.0439 | 3.71 | 12600 | 0.0503 | 0.9465 | 0.9556 | 0.9510 | 0.9815 |
| 0.0442 | 3.8 | 12900 | 0.0502 | 0.9466 | 0.9538 | 0.9502 | 0.9813 |
| 0.0431 | 3.88 | 13200 | 0.0503 | 0.9473 | 0.9549 | 0.9511 | 0.9817 |
| 0.0429 | 3.97 | 13500 | 0.0491 | 0.9473 | 0.9559 | 0.9516 | 0.9819 |
| 0.0356 | 4.06 | 13800 | 0.0522 | 0.9465 | 0.9566 | 0.9515 | 0.9818 |
| 0.0354 | 4.15 | 14100 | 0.0518 | 0.9489 | 0.9560 | 0.9524 | 0.9822 |
| 0.0357 | 4.24 | 14400 | 0.0509 | 0.9485 | 0.9565 | 0.9525 | 0.9822 |
| 0.0353 | 4.33 | 14700 | 0.0507 | 0.9492 | 0.9563 | 0.9527 | 0.9823 |
| 0.0352 | 4.41 | 15000 | 0.0498 | 0.9497 | 0.9572 | 0.9534 | 0.9826 |
| 0.0352 | 4.5 | 15300 | 0.0492 | 0.9496 | 0.9577 | 0.9536 | 0.9826 |
| 0.0341 | 4.59 | 15600 | 0.0493 | 0.9494 | 0.9583 | 0.9538 | 0.9827 |
| 0.034 | 4.68 | 15900 | 0.0495 | 0.9504 | 0.9576 | 0.9540 | 0.9828 |
| 0.0334 | 4.77 | 16200 | 0.0493 | 0.9501 | 0.9584 | 0.9542 | 0.9829 |
| 0.0335 | 4.86 | 16500 | 0.0493 | 0.9509 | 0.9574 | 0.9541 | 0.9828 |
| 0.0338 | 4.94 | 16800 | 0.0488 | 0.9507 | 0.9581 | 0.9544 | 0.9830 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.0a0+6a974be
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Tongjilibo/MiniLLM-1.1B-WithWudao-SFT | Tongjilibo | 2024-03-28T00:32:55Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-25T14:44:06Z | ---
license: apache-2.0
---
# 0. Notes
- These weights were trained with [build_MiniLLM_from_scratch](https://github.com/Tongjilibo/build_MiniLLM_from_scratch)
- These weights are a mirror of [zR-Llama-1b-ChatGLM2-6b-tokenizer](https://huggingface.co/zRzRzRzRzRzRzR/zR-Llama-1b-ChatGLM2-6b-tokenizer); the two are identical, so either one can be downloaded
## 1. Introduction
- **Motivation**: This project aims to build a small-parameter LLM and walk through all four stages of `pretraining` -> `instruction fine-tuning` -> `reward model` -> `reinforcement learning`, producing, at a controllable cost, a chat model that can handle simple conversation
- **Highlights**:
  - Uses the [bert4torch](https://github.com/Tongjilibo/bert4torch) training framework; the code is concise and efficient
  - The trained checkpoints can be used for inference directly with the `transformers` package
  - Optimized memory usage during training
  - Full training logs are provided for reproduction and comparison
- **Disclaimer**: The models trained in this experiment currently only have simple chat ability (limited by the corpus size, model scale, and the size and quality of the SFT corpus) and cannot answer complex questions.
## 2. Quick Start
- Environment setup
```shell
pip install bert4torch==0.4.9.post2 # if not found, add -i https://pypi.org/simple
```
- Script overview
```shell
# To keep jobs alive after the terminal closes, launch with nohup, tmux, or screen
# e.g. nohup torchrun --standalone --nproc_per_node=4 pretrain.py --name baby > nohup.log&
# Pretraining
cd pretrain
torchrun --standalone --nproc_per_node=4 pretrain.py # some users report DDP crashing partway through training; set `export NCCL_IB_DISABLE=1` if that happens
# Pretraining inference (command-line chat)
cd pretrain
python infer.py # python infer_transformers.py
# Instruction fine-tuning (SFT) training
cd sft
python sft.py
# SFT inference (command-line chat)
cd sft
python infer.py # python infer_transformers.py
# Convert a checkpoint to a transformers-compatible format
cd docs
python convert.py
```
## 3. Changelog
- **20240316**: Initial release; pretrained models `MiniLLM-MiniLLM-L12_H1024_A8-NoWudao` and `MiniLLM-MiniLLM-L12_H1024_A8-WithWudao`; SFT model `MiniLLM-L12_H1024_A8-WithWudao-SFT_Alpaca`
## 4. Pretraining
### 4.1 Pretraining corpus (from [baby-llama2-chinese](https://github.com/DLLXW/baby-llama2-chinese))
| Chinese pretraining corpus | Description |
|-------------------------|----------------------------------------|
| [Chinese Wikipedia](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered) | Data from the Chinese Wikipedia |
| [BaiduBaiKe](https://pan.baidu.com/s/1jIpCHnWLTNYabftavo3DVw?pwd=bwvb) (access code: bwvb) | Data from Chinese Baidu Baike |
| [C4_zh:part1](https://pan.baidu.com/s/18O2Tj_PPB718K8gnaWrWUQ) (access code: zv4r); [C4_zh:part2](https://pan.baidu.com/s/11PTgtUfFXvpNkOige9Iw4w) (access code: sb83); [C4_zh:part3](https://pan.baidu.com/s/1248QfTS8QHPojYW-0fd5jQ) (access code: l89d) | C4 is one of the largest language datasets available, collecting more than 156 billion tokens from over 365 million internet domains; C4_zh is its Chinese portion |
| [WuDaoCorpora](https://data.baai.ac.cn/details/WuDaoCorporaText) | 200 GB of Chinese data open-sourced by WuDao |
| [shibing624/medical](https://huggingface.co/datasets/shibing624/medical/tree/main) | A subset of medical-domain pretraining data from shibing624 |
The project releases the pretraining corpus already processed with the ChatGLM2-6B tokenizer, totaling **63.4B tokens**: [Corpus](https://pan.baidu.com/s/18o4gF-G68qfgOGWQXgAg3g) (access code: 6unr).
### 4.2 Pretrained weights
| Pretrained weights | Pretraining corpus | Download |
|----------------------------|--------------------------|---------------------|
| MiniLLM-L12_H1024_A8-NoWudao | (14B tokens)<br/>Chinese Wikipedia, BaiduBaiKe, shibing624/medical, C4_zh | [Baidu Netdisk](https://pan.baidu.com/s/1ixjSR3IW9YXRhQ08RX-lMQ?pwd=lrj5), [HuggingFace](https://huggingface.co/Tongjilibo/MiniLLM-L12_H1024_A8-NoWudao) |
| MiniLLM-L12_H1024_A8-WithWudao | (64B tokens)<br/>Chinese Wikipedia, BaiduBaiKe, shibing624/medical, C4_zh, WuDaoCorpora | [Baidu Netdisk](https://pan.baidu.com/s/1ixjSR3IW9YXRhQ08RX-lMQ?pwd=lrj5), [HuggingFace](https://huggingface.co/Tongjilibo/MiniLLM-L12_H1024_A8-WithWudao) |
### 4.3 Pretraining process
- Training configuration and training time
| Weights | Pretraining setup | Hardware usage and training time |
|-------------------------------|--------------------------|---------------------|
| MiniLLM-L12_H1024_A8-NoWudao | 14B tokens; btz=32*4gpu; lr=3e-4; warmup_steps=5000 | 4×A800 (80G), about 60G per GPU, 20 h |
| MiniLLM-L12_H1024_A8-WithWudao | 64B tokens; btz=32*4gpu; lr=1.5e-4; warmup_steps=5000 | ✅ 4×A800 (80G), about 60G per GPU, 3.79 days<br/>✅ baby-llama2 project: 2×4090, 26 days<br/>✅ Personal test on a single GPU with btz=8: about 17G GPU memory, duration unknown (gradient accumulation can further reduce memory usage) |
### 4.4 Using the pretrained model
```python
# add the following two lines depending on your network environment
import os
os.environ['HF_ENDPOINT'] = "https://hf-mirror.com"
from transformers import AutoTokenizer, LlamaForCausalLM
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model_name = 'Tongjilibo/MiniLLM-L12_H1024_A8-WithWudao' # 'Tongjilibo/MiniLLM-L12_H1024_A8-NoWudao'
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(model_name).to(device)
query = '王鹏是一名'
inputs = tokenizer.encode(query, return_tensors='pt', add_special_tokens=False).to(device)
output_ids = model.generate(inputs)
response = tokenizer.decode(output_ids[0].cpu(), skip_special_tokens=True)
print(response)
```
### 4.5 Pretraining continuation samples
- MiniLLM-L12_H1024_A8-NoWudao
```shell
用户:小明学习优异、身体健康、是一名
续写:小明学习优异、身体健康、是一名优秀的大连理工大学学生,从大学一年级起,他就对体育十分痴迷,成绩名列前茅,还参加了多项比赛,在全校体育活动中,他成绩名列前茅,被大连理工大学授予“优秀运动员”称号。
2006年,高中二年级,小明的同学杨帆来到大连理工大学,他决心要改变自己的命运。他每天放学后,都会跑到操场上练习篮球,练到下午三点多钟时,小明的篮球技术开始显现出来,他的篮球动作敏捷,力量大,而且,他经常利用空档时间练习跳投
用户:床前明月光,疑是地上霜。举头望明月,
续写:床前明月光,疑是地上霜。举头望明月,低头思故乡……”
我读着这些诗句时也倍感亲切和亲切了!
```
- MiniLLM-L12_H1024_A8-WithWudao
```shell
用户:小明学习优异、身体健康、是一名
续写:小明学习优异、身体健康、是一名优秀的教师。他为人诚实、有责任心。在课堂上,他能认真听讲,积极思考,对每节课都积极举手发言。在课后,他主动要求帮助老师,认真做好作业,按时完成老师布置的作业,受到同学们的好评。
用户:床前明月光,疑是地上霜。举头望明月,
续写:床前明月光,疑是地上霜。举头望明月,低头思故乡。”这诗不仅描述了中秋夜月夜的温馨与宁静,还写出了作者对故土深情的眷恋和思乡之情。“月上柳梢头”一语,是写月下所见。“欲将心事付瑶琴”,指欲诉别情; “举头望明月”,写中秋之夜,月上高挂、皓月当空、群星闪耀的景象;“低头思故乡”,写思念故土的深情厚意。
这首诗在写作手法上,主要运用象征
```
## 5. Instruction fine-tuning
### 5.1 Instruction-tuning corpus (curated usable datasets)
| Dataset | Description |
| ---------------- | -------------------- |
| [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh) | Self-instruct data obtained with GPT-4 following the Alpaca method, about 50k samples |
| [BelleGroup/Belle-0.5M-cn](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN) | About 500k Chinese instruction samples generated by the BELLE project |
| [BelleGroup/Belle-1M-cn](https://huggingface.co/datasets/BelleGroup/train_1M_CN) | About 1M Chinese instruction samples generated by the BELLE project |
| [BelleGroup/Belle-school_math_0.25M](https://huggingface.co/datasets/BelleGroup/school_math_0.25M) | 0.25M math instruction samples released by BELLE |
| [BelleGroup/Belle-multiturn_chat_0.8M](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M) | 0.8M multi-turn task-oriented dialogues released by BELLE |
| [YeungNLP/firefly-train-1.1M](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) | Firefly data covering 23 common Chinese NLP tasks, plus many samples tied to Chinese culture such as couplets, poetry, classical-Chinese translation, prose, and Jin Yong novels; several instruction templates were hand-written for each task to ensure quality and richness, 1.15M samples in total |
| [fnlp/moss-002-sft-data](https://huggingface.co/datasets/fnlp/moss-002-sft-data) | Multi-turn dialogue data used by MOSS-002, covering helpfulness, faithfulness, and harmlessness; about 570k English and 590k Chinese dialogues generated by text-davinci-003 |
| [fnlp/moss-003-sft-data](https://huggingface.co/datasets/fnlp/moss-003-sft-data) | Multi-turn dialogue data used by moss-moon-003-sft, built from about 100k user inputs collected during the MOSS-002 beta together with gpt-3.5-turbo; compared with moss-002-sft-data it better matches the real user intent distribution, with finer-grained helpfulness labels, broader harmlessness data, and longer dialogues, about 1.1M dialogues |
| [shareAI/CodeChat](https://huggingface.co/datasets/shareAI/CodeChat) | Mainly samples related to logical reasoning, code Q&A, and code generation |
| [shareAI/ShareGPT-Chinese-English-90k](https://huggingface.co/datasets/shareAI/ShareGPT-Chinese-English-90k) | High-quality parallel Chinese-English human-machine Q&A dataset covering user questions from real, complex scenarios |
| [deepctrl/deepctrl-sft-data](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data/summary) | A high-quality SFT dataset carefully collected and curated by 匠数科技 (deepctrl), containing a 10M-sample Chinese set and a 2M-sample English set |
### 5.2 Instruction-tuned weights
| SFT weights | Corpus | Download |
|----------------------------|-------------------------|--------------------------|
| MiniLLM-L12_H1024_A8-WithWudao-SFT_Alpaca | [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh) | [Baidu Netdisk](https://pan.baidu.com/s/1ixjSR3IW9YXRhQ08RX-lMQ?pwd=lrj5), [HuggingFace](https://huggingface.co/Tongjilibo/MiniLLM-L12_H1024_A8-WithWudao-SFT_Alpaca) |
### 5.3 SFT training process
- Training configuration and training time
| Weights | Training setup | Hardware usage and training time |
|-------------------------------|--------------------------|---------------------|
| MiniLLM-L12_H1024_A8-NoWudao | [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh) dataset; btz=8; lr=2e-5; 5 epochs | single 4090, 17G VRAM, 45 min |
### 5.4 Using the instruction-tuned model
```python
# add the following two lines depending on your network environment
import os
os.environ['HF_ENDPOINT'] = "https://hf-mirror.com"
from transformers import AutoTokenizer, LlamaForCausalLM
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model_name = 'Tongjilibo/MiniLLM-L12_H1024_A8-WithWudao-SFT_Alpaca'
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(model_name).to(device)
query = '你好'
query = f'<human>{query}<robot>'
inputs = tokenizer.encode(query, return_tensors='pt', add_special_tokens=False).to(device)
output_ids = model.generate(inputs)
response = tokenizer.decode(output_ids[0].cpu(), skip_special_tokens=True)[len(query):]
print(response)
```
### 5.5 SFT chat samples
- MiniLLM-L12_H1024_A8-WithWudao-SFT_Alpaca
```shell
User:你好
Assistant:你好!很高兴见到你。
User:你能做什么?
Assistant:你好!我是一个人工智能助手,有什么我可以为你做的吗?
User:番茄炒蛋怎么制作?
Assistant:如果你想要制作一个番茄炒蛋,那么下面这些步骤可能会帮助你:
1. 准备食材:准备好鸡蛋、盐、香菜、胡萝卜丝、黄瓜丝等食材。
2. 清洗鸡蛋:在搅拌碗中打散鸡蛋,使其表面可以清洁。
3. 准备材料:准备一个锅,倒入适量的水,将鸡蛋液倒入锅中。
4. 煮鸡蛋:用汤锅或锅煎至鸡蛋表面金黄色,熟透即可。
5. 炒蔬菜:在锅中加入适量的油,将胡萝卜丝和黄瓜丝个人喜欢的,翻炒几下,直到胡萝卜熟透。
6. 加入鸡蛋:从锅中取出鸡蛋,倒入锅中。
7. 调味:将炒好的鸡蛋倒入锅中,与蔬菜一起翻炒几下即可。
注意:想要炒出更丰富的蔬菜味道,可以适当加入适量的盐和香菜,增加风味。
```
|
Changgil/K2S3-Mistral-7b-v1.2 | Changgil | 2024-03-28T00:31:17Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T00:24:04Z | ---
license: cc-by-nc-4.0
language:
- en
- ko
---
---
## Developed by :
* K2S3
## Model Number:
* K2S3-Mistral-7b-v1.2
## Base Model :
* mistralai/Mistral-7B-v0.1
### Training Data
* The training data for this model includes alpaca-gpt4-data and samples from the OpenOrca dataset.
### Training Method
* This model was fine-tuned from the "mistralai/Mistral-7B-v0.1" base model using full-parameter SFT (Supervised Fine-Tuning).
### Hardware
* Hardware: Utilized two A100 (80G*2EA) GPUs for training.
* Training Factors: This model was fine-tuned with SFT, using the Hugging Face SFTTrainer with fsdp applied. |
ke-lly/sft_openassistant-guanaco | ke-lly | 2024-03-28T00:19:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:mit",
"region:us"
]
| null | 2024-02-27T22:25:59Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: openai-community/gpt2
datasets:
- generator
metrics:
- accuracy
model-index:
- name: sft_openassistant-guanaco
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_openassistant-guanaco
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1064
- Accuracy: 0.0190
## Model description
More information needed
## Intended uses & limitations
More information needed
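As a usage sketch, the adapter can be loaded on top of the GPT-2 base model with PEFT; the prompt format below follows the common openassistant-guanaco convention, which is an assumption.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openai-community/gpt2"
adapter_id = "ke-lly/sft_openassistant-guanaco"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the fine-tuned adapter

prompt = "### Human: What is supervised fine-tuning?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```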
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.7182 | 0.01 | 3 | 3.1069 | 0.0190 |
| 3.3356 | 0.02 | 6 | 3.1068 | 0.0190 |
| 3.2158 | 0.03 | 9 | 3.1066 | 0.0190 |
| 3.4288 | 0.04 | 12 | 3.1065 | 0.0190 |
| 3.5899 | 0.04 | 15 | 3.1064 | 0.0190 |
| 3.6934 | 0.05 | 18 | 3.1064 | 0.0190 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.37.0
- Pytorch 2.0.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1 |
lunarsylph/stablecell_v6 | lunarsylph | 2024-03-28T00:07:06Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T00:02:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Lewdiculous/mistral-7b-v0.2-layla-v4-GGUF-IQ-Imatrix | Lewdiculous | 2024-03-27T23:49:30Z | 42 | 3 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-03-27T23:05:49Z | ---
license: apache-2.0
---
# #Roleplay
This card will be updated with more information later.
This repo contains GGUF-IQ-Imatrix quants for: <br> https://huggingface.co/l3utterfly/mistral-7b-v0.2-layla-v4
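Until the card is filled in, here is a minimal sketch for loading one of the quants locally with `llama-cpp-python`; the exact `.gguf` filename and sampling settings are assumptions, not something documented in this repo.

```python
# Minimal sketch, not official usage: load a GGUF quant with llama-cpp-python.
# The filename below is an assumption — substitute whichever quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-v0.2-layla-v4-Q4_K_M-imat.gguf",  # assumed filename
    n_ctx=4096,       # context length
    n_gpu_layers=-1,  # offload all layers to GPU when available
)

# The proper prompt template depends on the base model; plain completion shown here.
out = llm("Write a one-sentence greeting for a roleplay character.", max_tokens=64)
print(out["choices"][0]["text"])
```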
 |
gaodrew/mysterious-bouncy-flan-2 | gaodrew | 2024-03-27T23:49:18Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-27T23:48:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AzalKhan/vicuna_ft_dpo_fin | AzalKhan | 2024-03-27T23:46:12Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-03-27T23:43:26Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gusanito1/xDs | Gusanito1 | 2024-03-27T23:45:25Z | 0 | 0 | null | [
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-27T23:42:18Z | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Litzy619/V0328MP4 | Litzy619 | 2024-03-27T23:16:30Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
]
| null | 2024-03-27T21:29:38Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0328MP4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0328MP4
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
- mixed_precision_training: Native AMP
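The hyperparameters above map onto a `transformers.TrainingArguments` object; the sketch below is a reconstruction for reference only, not the author's training script (model, adapter, and dataset setup are omitted).

```python
# Sketch only: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="V0328MP4",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,   # 8 x 16 = 128 effective train batch size
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=20,
    num_train_epochs=1,
    fp16=True,                        # "Native AMP" mixed precision
    seed=42,
)
```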
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.1408 | 0.09 | 10 | 2.5415 |
| 5.4886 | 0.18 | 20 | 2.4963 |
| 4.5457 | 0.27 | 30 | 2.4110 |
| 4.1074 | 0.36 | 40 | 2.3242 |
| 3.5825 | 0.45 | 50 | 2.2528 |
| 3.1612 | 0.54 | 60 | 2.2006 |
| 2.8782 | 0.63 | 70 | 2.1606 |
| 2.5962 | 0.73 | 80 | 2.1360 |
| 2.7051 | 0.82 | 90 | 2.1230 |
| 2.5853 | 0.91 | 100 | 2.1162 |
| 2.6212 | 1.0 | 110 | 2.1140 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ThomasComics/Noro-Hermes-7B | ThomasComics | 2024-03-27T23:03:20Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"NeverSleep/Noromaid-7B-0.4-DPO",
"NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:NeverSleep/Noromaid-7B-0.4-DPO",
"base_model:merge:NeverSleep/Noromaid-7B-0.4-DPO",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:merge:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T22:44:02Z | ---
tags:
- merge
- mergekit
- lazymergekit
- NeverSleep/Noromaid-7B-0.4-DPO
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
base_model:
- NeverSleep/Noromaid-7B-0.4-DPO
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
---
# Noro-Hermes-7B
Noro-Hermes-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
* [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: NeverSleep/Noromaid-7B-0.4-DPO
layer_range: [0, 32]
- model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: NeverSleep/Noromaid-7B-0.4-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ThomasComics/Noro-Hermes-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
0x0son0/nr_111 | 0x0son0 | 2024-03-27T22:58:58Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T22:06:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Madao-314/dqn-SpaceInvadersNoFrameskip-v4 | Madao-314 | 2024-03-27T22:56:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-27T22:56:10Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 675.00 +/- 220.25
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Madao-314 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Madao-314 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
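Outside the RL Zoo CLI, the checkpoint can also be loaded directly in Python. This is a sketch assuming the standard RL Zoo file naming, not a documented snippet from this repo.

```python
# Sketch only: load the checkpoint with huggingface_sb3 and Stable Baselines3.
# The filename follows the usual RL Zoo convention and is an assumption.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="Madao-314/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed filename
)
model = DQN.load(checkpoint)  # evaluation needs the same wrapped Atari env as training
```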
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Madao-314
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
alikanakar/bert-base-multilingual-cased-D-E | alikanakar | 2024-03-27T22:56:33Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-03-27T19:59:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-D-E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-D-E
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0374
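For quick experimentation, a minimal sketch using the `question-answering` pipeline (assuming the checkpoint exposes a standard extractive-QA head, as the pipeline tag suggests):

```python
# Sketch only: extractive question answering with the transformers pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="alikanakar/bert-base-multilingual-cased-D-E")

result = qa(
    question="What architecture is the model based on?",
    context="The checkpoint is a fine-tuned version of bert-base-multilingual-cased.",
)
print(result["answer"], result["score"])
```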
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5453 | 1.0 | 2505 | 1.2072 |
| 1.18 | 2.0 | 5010 | 1.0775 |
| 0.8819 | 3.0 | 7515 | 1.0374 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.2.1+cu121
- Datasets 2.9.0
- Tokenizers 0.13.3
|
rk68/mistral-7b-LL144 | rk68 | 2024-03-27T22:53:55Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-27T22:52:20Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the wu981526092/LL144 dataset.
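Since the repo ships PEFT adapter weights (per the tags above), a hedged loading sketch follows; the prompt and generation settings are placeholders, not part of the original card.

```python
# Sketch only: attach the LoRA adapter to the Mistral-7B-v0.1 base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "rk68/mistral-7b-LL144")

inputs = tokenizer("Example prompt:", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```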
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 100
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
sanchit-gandhi/distil-large-v3-hi-ft-frozen-encoder | sanchit-gandhi | 2024-03-27T22:47:54Z | 23 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:mozilla-foundation/common_voice_16_1",
"base_model:distil-whisper/distil-large-v3",
"base_model:finetune:distil-whisper/distil-large-v3",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-03-27T12:09:14Z | ---
license: mit
base_model: distil-whisper/distil-large-v3
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_1
metrics:
- wer
model-index:
- name: distil-whisper/distil-large-v3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_1 hi
type: mozilla-foundation/common_voice_16_1
config: hi
split: test
args: hi
metrics:
- name: Wer
type: wer
value: 0.26639882562002626
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-whisper/distil-large-v3
This model is a fine-tuned version of [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) on the mozilla-foundation/common_voice_16_1 hi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3749
- Wer: 0.2664
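A minimal transcription sketch via the ASR pipeline (the audio filename is a placeholder; long-form audio may need `chunk_length_s`):

```python
# Sketch only: Hindi transcription with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/distil-large-v3-hi-ft-frozen-encoder",
)

print(asr("sample_hindi.wav")["text"])  # placeholder audio path
```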
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1035 | 4.5 | 1000 | 0.3015 | 0.3250 |
| 0.0165 | 9.01 | 2000 | 0.3496 | 0.3007 |
| 0.0022 | 13.51 | 3000 | 0.3649 | 0.2786 |
| 0.0011 | 18.02 | 4000 | 0.3700 | 0.2681 |
| 0.0003 | 22.52 | 5000 | 0.3749 | 0.2664 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
|
gaodrew/mysterious-bouncy-flan | gaodrew | 2024-03-27T22:45:13Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-27T22:44:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
iadithyan/splitter_mistral_7b_merged_16bit | iadithyan | 2024-03-27T22:45:12Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-27T22:44:33Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wisdominanutshell/splitter_mistral_7b_adapter | wisdominanutshell | 2024-03-27T22:44:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-27T22:44:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|